Mar 08 00:22:57 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 08 00:22:57 crc restorecon[4746]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Mar 08 00:22:57 crc restorecon[4746]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc 
restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc 
restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 08 
00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin
to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to
system_u:object_r:container_file_t:s0:c14,c22 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 
00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 08 00:22:57 crc 
restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc 
restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 08 00:22:57 crc restorecon[4746]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:57 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc 
restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc 
restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc 
restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc 
restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 08 00:22:58 crc restorecon[4746]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Mar 08 00:22:58 crc restorecon[4746]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Mar 08 00:22:58 crc kubenswrapper[4762]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 00:22:58 crc kubenswrapper[4762]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 08 00:22:58 crc kubenswrapper[4762]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 00:22:58 crc kubenswrapper[4762]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 08 00:22:58 crc kubenswrapper[4762]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 08 00:22:58 crc kubenswrapper[4762]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.980655 4762 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.987491 4762 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.987913 4762 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.987931 4762 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.987947 4762 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.987961 4762 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.987972 4762 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.987982 4762 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.987992 4762 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988003 4762 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988014 4762 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988025 4762 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988035 4762 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988046 4762 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988056 4762 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988066 4762 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988076 4762 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988085 4762 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988096 4762 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988105 4762 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988115 4762 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988139 4762 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988150 4762 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988159 4762 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988170 4762 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988181 4762 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988190 4762 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988202 4762 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988212 4762 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988222 4762 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988232 4762 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988242 4762 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988253 4762 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988266 4762 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988280 4762 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988292 4762 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988305 4762 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988316 4762 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988326 4762 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988335 4762 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988345 4762 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988356 4762 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988370 4762 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988382 4762 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988393 4762 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988405 4762 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988417 4762 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988427 4762 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988438 4762 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988449 4762 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988459 4762 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988470 4762 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988480 4762 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988496 4762 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988509 4762 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988519 4762 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988530 4762 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988540 4762 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988550 4762 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988559 4762 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988569 4762 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988578 4762 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988588 4762 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988597 4762 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988610 4762 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988620 4762 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988631 4762 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988642 4762 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988654 4762 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988663 4762 feature_gate.go:330] unrecognized feature gate: Example
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988673 4762 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.988682 4762 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.989835 4762 flags.go:64] FLAG: --address="0.0.0.0"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.989865 4762 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.989885 4762 flags.go:64] FLAG: --anonymous-auth="true"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.989899 4762 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.989914 4762 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.989926 4762 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.989942 4762 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.989956 4762 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.989968 4762 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.989979 4762 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.989994 4762 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990006 4762 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990018 4762 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990029 4762 flags.go:64] FLAG: --cgroup-root=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990041 4762 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990052 4762 flags.go:64] FLAG: --client-ca-file=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990064 4762 flags.go:64] FLAG: --cloud-config=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990075 4762 flags.go:64] FLAG: --cloud-provider=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990086 4762 flags.go:64] FLAG: --cluster-dns="[]"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990103 4762 flags.go:64] FLAG: --cluster-domain=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990114 4762 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990126 4762 flags.go:64] FLAG: --config-dir=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990137 4762 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990149 4762 flags.go:64] FLAG: --container-log-max-files="5"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990164 4762 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990175 4762 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990187 4762 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990200 4762 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990212 4762 flags.go:64] FLAG: --contention-profiling="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990223 4762 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990234 4762 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990246 4762 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990267 4762 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990282 4762 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990294 4762 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990305 4762 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990316 4762 flags.go:64] FLAG: --enable-load-reader="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990328 4762 flags.go:64] FLAG: --enable-server="true"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990340 4762 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990354 4762 flags.go:64] FLAG: --event-burst="100"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990365 4762 flags.go:64] FLAG: --event-qps="50"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990376 4762 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990388 4762 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990400 4762 flags.go:64] FLAG: --eviction-hard=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990413 4762 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990425 4762 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990436 4762 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990448 4762 flags.go:64] FLAG: --eviction-soft=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990460 4762 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990471 4762 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990484 4762 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990495 4762 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990506 4762 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990518 4762 flags.go:64] FLAG: --fail-swap-on="true"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990529 4762 flags.go:64] FLAG: --feature-gates=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990542 4762 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990553 4762 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990564 4762 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990576 4762 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990588 4762 flags.go:64] FLAG: --healthz-port="10248"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990599 4762 flags.go:64] FLAG: --help="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990611 4762 flags.go:64] FLAG: --hostname-override=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990622 4762 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990634 4762 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990645 4762 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990656 4762 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990667 4762 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990678 4762 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990692 4762 flags.go:64] FLAG: --image-service-endpoint=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990703 4762 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990714 4762 flags.go:64] FLAG: --kube-api-burst="100"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990725 4762 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990737 4762 flags.go:64] FLAG: --kube-api-qps="50"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990748 4762 flags.go:64] FLAG: --kube-reserved=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990791 4762 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990803 4762 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990815 4762 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990826 4762 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990838 4762 flags.go:64] FLAG: --lock-file=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990849 4762 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990860 4762 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990871 4762 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990889 4762 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990901 4762 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990912 4762 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990923 4762 flags.go:64] FLAG: --logging-format="text"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990934 4762 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990946 4762 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990958 4762 flags.go:64] FLAG: --manifest-url=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990970 4762 flags.go:64] FLAG: --manifest-url-header=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990985 4762 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.990997 4762 flags.go:64] FLAG: --max-open-files="1000000"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991010 4762 flags.go:64] FLAG: --max-pods="110"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991021 4762 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991033 4762 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991044 4762 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991055 4762 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991066 4762 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991077 4762 flags.go:64] FLAG: --node-ip="192.168.126.11"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991089 4762 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991123 4762 flags.go:64] FLAG: --node-status-max-images="50"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991134 4762 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991146 4762 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991157 4762 flags.go:64] FLAG: --pod-cidr=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991170 4762 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991188 4762 flags.go:64] FLAG: --pod-manifest-path=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991200 4762 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991211 4762 flags.go:64] FLAG: --pods-per-core="0"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991223 4762 flags.go:64] FLAG: --port="10250"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991235 4762 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991246 4762 flags.go:64] FLAG: --provider-id=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991257 4762 flags.go:64] FLAG: --qos-reserved=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991268 4762 flags.go:64] FLAG: --read-only-port="10255"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991281 4762 flags.go:64] FLAG: --register-node="true"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991293 4762 flags.go:64] FLAG: --register-schedulable="true"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991305 4762 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991324 4762 flags.go:64] FLAG: --registry-burst="10"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991336 4762 flags.go:64] FLAG: --registry-qps="5"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991347 4762 flags.go:64] FLAG: --reserved-cpus=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991358 4762 flags.go:64] FLAG: --reserved-memory=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991371 4762 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991383 4762 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991396 4762 flags.go:64] FLAG: --rotate-certificates="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991408 4762 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991419 4762 flags.go:64] FLAG: --runonce="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991430 4762 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991442 4762 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991454 4762 flags.go:64] FLAG: --seccomp-default="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991465 4762 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991477 4762 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991489 4762 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991501 4762 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991513 4762 flags.go:64] FLAG: --storage-driver-password="root"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991554 4762 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991566 4762 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991578 4762 flags.go:64] FLAG: --storage-driver-user="root"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991589 4762 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991601 4762 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991614 4762 flags.go:64] FLAG: --system-cgroups=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991625 4762 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991657 4762 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991676 4762 flags.go:64] FLAG: --tls-cert-file=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991687 4762 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991702 4762 flags.go:64] FLAG: --tls-min-version=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991714 4762 flags.go:64] FLAG: --tls-private-key-file=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991725 4762 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991736 4762 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991747 4762 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991793 4762 flags.go:64] FLAG: --v="2"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991809 4762 flags.go:64] FLAG: --version="false"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991824 4762 flags.go:64] FLAG: --vmodule=""
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991837 4762 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 08 00:22:58 crc kubenswrapper[4762]: I0308 00:22:58.991849 4762 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.992102 4762 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.992115 4762 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.992125 4762 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.992135 4762 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.992145 4762 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.992155 4762 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.992165 4762 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.992175 4762 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.992184 4762 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.992194 4762 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.992204 4762 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.992213 4762 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.992223 4762 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 08 00:22:58 crc kubenswrapper[4762]: W0308 00:22:58.992233 4762 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992246 4762 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992257 4762 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992269 4762 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992285 4762 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992298 4762 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992312 4762 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992328 4762 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992341 4762 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992354 4762 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992383 4762 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992395 4762 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992409 4762 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992419 4762 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992429 4762 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992439 4762 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992449 4762 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992459 4762 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992470 4762 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992480 4762 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992490 4762 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992502 4762 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992513 4762 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992525 4762 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992537 4762 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992548 4762 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992558 4762 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992568 4762 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992578 4762 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992587 4762 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992726 4762 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992745 4762 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992754 4762 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992804 4762 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992814 4762 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992822 4762 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992835 4762 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992844 4762 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992853 4762 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992892 4762 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992902 4762 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992911 4762 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992919 4762 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992927 4762 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992935 4762 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992943 4762 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992980 4762 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992989 4762 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.992997 4762 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.993005 4762 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.993013 4762 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.993021 4762 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.993029 4762 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.993069 4762 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.993082 4762 feature_gate.go:330] unrecognized feature gate: Example Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.993092 4762 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.993101 4762 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:58.993109 4762 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:58.993169 4762 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.009236 4762 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.009518 4762 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009675 4762 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009698 4762 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 
00:22:59.009710 4762 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009725 4762 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009742 4762 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009755 4762 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009808 4762 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009822 4762 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009835 4762 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009850 4762 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009861 4762 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009873 4762 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009884 4762 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009895 4762 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009906 4762 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009917 4762 feature_gate.go:330] unrecognized feature gate: 
SetEIPForNLBIngressController Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009927 4762 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009941 4762 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009954 4762 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009965 4762 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009977 4762 feature_gate.go:330] unrecognized feature gate: Example Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009989 4762 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.009999 4762 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010009 4762 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010019 4762 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010030 4762 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010039 4762 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010049 4762 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010059 4762 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010070 4762 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 
08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010079 4762 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010090 4762 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010099 4762 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010112 4762 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010127 4762 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010139 4762 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010150 4762 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010161 4762 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010171 4762 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010182 4762 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010192 4762 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010201 4762 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010212 4762 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010222 4762 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010232 4762 
feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010243 4762 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010252 4762 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010262 4762 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010272 4762 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010282 4762 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010292 4762 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010302 4762 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010312 4762 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010321 4762 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010331 4762 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010342 4762 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010353 4762 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010363 4762 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010373 4762 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 08 
00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010383 4762 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010393 4762 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010403 4762 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010413 4762 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010424 4762 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010434 4762 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010444 4762 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010455 4762 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010465 4762 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010476 4762 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010486 4762 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010498 4762 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.010515 4762 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false 
ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010845 4762 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010872 4762 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010884 4762 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010896 4762 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010909 4762 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010921 4762 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010931 4762 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010942 4762 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010954 4762 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010965 4762 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010975 4762 feature_gate.go:330] unrecognized feature gate: Example Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010986 4762 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.010996 4762 
feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011007 4762 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011017 4762 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011027 4762 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011037 4762 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011047 4762 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011056 4762 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011067 4762 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011077 4762 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011087 4762 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011097 4762 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011108 4762 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011119 4762 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011129 4762 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011138 4762 feature_gate.go:330] unrecognized feature gate: OnClusterBuild 
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011148 4762 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011159 4762 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011170 4762 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011180 4762 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011189 4762 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011199 4762 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011209 4762 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011221 4762 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011231 4762 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011245 4762 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011258 4762 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011269 4762 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011280 4762 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011292 4762 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011303 4762 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011316 4762 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011329 4762 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011340 4762 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011350 4762 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011360 4762 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011370 4762 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011381 4762 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011391 4762 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011401 4762 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011411 
4762 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011421 4762 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011431 4762 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011441 4762 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011453 4762 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011463 4762 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011474 4762 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011483 4762 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011493 4762 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011504 4762 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011514 4762 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011524 4762 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011534 4762 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011582 4762 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011595 4762 feature_gate.go:330] unrecognized feature 
gate: ManagedBootImages Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011605 4762 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011615 4762 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011629 4762 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011640 4762 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.011662 4762 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.011678 4762 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.012043 4762 server.go:940] "Client rotation is on, will bootstrap in background" Mar 08 00:22:59 crc kubenswrapper[4762]: E0308 00:22:59.018612 4762 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.028302 4762 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.028478 4762 certificate_store.go:130] Loading cert/key pair from 
"/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.030940 4762 server.go:997] "Starting client certificate rotation" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.030981 4762 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.031212 4762 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.056538 4762 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.059426 4762 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 08 00:22:59 crc kubenswrapper[4762]: E0308 00:22:59.060169 4762 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.083094 4762 log.go:25] "Validated CRI v1 runtime API" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.123889 4762 log.go:25] "Validated CRI v1 image API" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.126410 4762 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.133152 4762 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-08-00-18-28-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 08 00:22:59 crc 
kubenswrapper[4762]: I0308 00:22:59.133196 4762 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.161699 4762 manager.go:217] Machine: {Timestamp:2026-03-08 00:22:59.157549284 +0000 UTC m=+0.631693708 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:92130618-2066-4559-910d-c8073b27a95c BootID:70a459ab-aec5-4a81-84f3-03cc68c17eda Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 
Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:57:98:1f Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:57:98:1f Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:c3:de:8a Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:3f:a1:75 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:99:ea:1b Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:d4:19:00 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:de:8c:a2 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:82:c2:80:c4:ab:47 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:ce:00:35:9d:55:8d Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 
BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.162102 4762 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.162340 4762 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.164865 4762 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.165219 4762 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.165272 4762 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSR
eserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.165620 4762 topology_manager.go:138] "Creating topology manager with none policy" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.165640 4762 container_manager_linux.go:303] "Creating device plugin manager" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.166340 4762 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.166382 4762 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.166749 4762 state_mem.go:36] "Initialized new in-memory state store" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.166904 4762 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.170851 4762 kubelet.go:418] "Attempting to sync node with API server" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.170887 4762 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.170927 4762 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.170949 4762 kubelet.go:324] "Adding apiserver pod source" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.170968 4762 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 
00:22:59.177267 4762 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.178352 4762 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.178719 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.178720 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Mar 08 00:22:59 crc kubenswrapper[4762]: E0308 00:22:59.178856 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Mar 08 00:22:59 crc kubenswrapper[4762]: E0308 00:22:59.178880 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.181140 4762 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 08 00:22:59 
crc kubenswrapper[4762]: I0308 00:22:59.182817 4762 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.182856 4762 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.182871 4762 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.182885 4762 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.182907 4762 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.182921 4762 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.182935 4762 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.182956 4762 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.182973 4762 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.182988 4762 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.183030 4762 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.183043 4762 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.183949 4762 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.184661 4762 server.go:1280] "Started kubelet" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 
00:22:59.185672 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.186492 4762 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.186510 4762 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 08 00:22:59 crc systemd[1]: Started Kubernetes Kubelet. Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.187581 4762 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.188313 4762 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.188397 4762 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 08 00:22:59 crc kubenswrapper[4762]: E0308 00:22:59.188731 4762 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.188748 4762 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.188828 4762 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.188838 4762 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.189466 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Mar 08 
00:22:59 crc kubenswrapper[4762]: E0308 00:22:59.189522 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="200ms" Mar 08 00:22:59 crc kubenswrapper[4762]: E0308 00:22:59.189576 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.190206 4762 server.go:460] "Adding debug handlers to kubelet server" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.191395 4762 factory.go:55] Registering systemd factory Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.192713 4762 factory.go:221] Registration of the systemd container factory successfully Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.197229 4762 factory.go:153] Registering CRI-O factory Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.197272 4762 factory.go:221] Registration of the crio container factory successfully Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.197365 4762 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.197397 4762 factory.go:103] Registering Raw factory Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.197433 4762 manager.go:1196] Started watching for new ooms in manager Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.202669 4762 manager.go:319] Starting recovery of 
all containers Mar 08 00:22:59 crc kubenswrapper[4762]: E0308 00:22:59.204061 4762 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189ab5e082272d72 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.184618866 +0000 UTC m=+0.658763250,LastTimestamp:2026-03-08 00:22:59.184618866 +0000 UTC m=+0.658763250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.214966 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215084 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215107 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215128 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215146 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215163 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215180 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215201 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215226 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215246 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215296 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215314 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215333 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215354 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215372 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215389 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" 
volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215406 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215423 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215443 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215461 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215478 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215496 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" 
seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215515 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215534 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215586 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215616 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215647 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215678 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215701 4762 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215725 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215749 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215833 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215859 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215877 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215930 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215950 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215967 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.215985 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216003 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216023 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216043 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216064 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216082 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216103 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216123 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216204 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216223 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" 
seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216243 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216261 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216280 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216299 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216322 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216348 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216368 4762 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216388 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216407 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216429 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216447 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216493 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216513 4762 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216531 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216551 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216569 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216588 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216606 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216624 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216641 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216660 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216677 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216696 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216714 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216731 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216751 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216802 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216820 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216840 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216857 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216874 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216893 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216910 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216926 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216943 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216960 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.216983 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217000 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217018 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217038 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217056 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217077 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217095 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217115 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217134 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217153 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217171 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217197 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217214 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217231 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217250 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217268 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217286 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217307 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217323 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" 
Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217342 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217360 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217388 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217409 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217429 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217450 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217469 
4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217490 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217511 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217532 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217551 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217572 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217594 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217616 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217634 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217652 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217669 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217690 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217709 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217726 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217745 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217790 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217809 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217829 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217847 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217866 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217883 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217901 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217921 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217940 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217958 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.217984 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218002 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218020 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218041 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218060 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218080 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" 
seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218098 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218116 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218135 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218151 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218169 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218188 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Mar 08 00:22:59 crc 
kubenswrapper[4762]: I0308 00:22:59.218205 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218222 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218241 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218260 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218277 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218294 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218312 4762 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218331 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218347 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218368 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.218388 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.220952 4762 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221007 
4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221034 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221058 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221083 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221103 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221122 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221142 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221162 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221180 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221198 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221216 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221239 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221258 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221276 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221327 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221347 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221366 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221384 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221402 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221421 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221446 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221473 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221497 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221519 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221542 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221570 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221592 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221611 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221629 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221649 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221672 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221692 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221710 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221728 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221747 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221795 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221814 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221832 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221851 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221869 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221889 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221908 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221926 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" 
seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221944 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221964 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.221982 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.222000 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.222018 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.222037 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.222055 4762 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.222077 4762 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.222099 4762 reconstruct.go:97] "Volume reconstruction finished" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.222113 4762 reconciler.go:26] "Reconciler: start to sync state" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.238503 4762 manager.go:324] Recovery completed Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.253510 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.255268 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.255321 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.255332 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.256147 4762 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.256166 4762 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.256192 4762 state_mem.go:36] "Initialized new in-memory state store" Mar 08 00:22:59 crc 
kubenswrapper[4762]: I0308 00:22:59.258952 4762 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.261890 4762 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.261960 4762 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.262002 4762 kubelet.go:2335] "Starting kubelet main sync loop" Mar 08 00:22:59 crc kubenswrapper[4762]: E0308 00:22:59.262078 4762 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.265336 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Mar 08 00:22:59 crc kubenswrapper[4762]: E0308 00:22:59.265406 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.273054 4762 policy_none.go:49] "None policy: Start" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.274020 4762 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.274155 4762 state_mem.go:35] "Initializing new in-memory state store" Mar 08 00:22:59 crc kubenswrapper[4762]: E0308 00:22:59.289850 4762 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.335396 4762 manager.go:334] "Starting Device Plugin manager" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.335689 4762 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.335875 4762 server.go:79] "Starting device plugin registration server" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.336806 4762 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.336981 4762 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.337531 4762 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.337943 4762 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.338111 4762 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 08 00:22:59 crc kubenswrapper[4762]: E0308 00:22:59.346812 4762 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.362957 4762 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.363142 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.364684 4762 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.364747 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.364815 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.365104 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.365284 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.365361 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.366490 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.366532 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.366572 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.366589 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.366538 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.366675 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.366953 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.367074 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.367126 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.368174 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.368209 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.368231 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.368459 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.368506 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.368524 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.368703 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.368906 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.368970 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.369653 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.369699 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.369720 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.369894 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.370133 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.370204 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.371066 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.371085 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.371111 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.371170 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.371135 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.371236 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.371392 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.371421 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.372259 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.372294 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.372311 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.372373 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.372389 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.372397 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:22:59 crc kubenswrapper[4762]: E0308 00:22:59.390628 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="400ms" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.424168 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.424209 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.424233 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.424255 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.424279 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.424299 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.424319 4762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.424338 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.424359 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.424438 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.424498 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.424543 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.424618 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.424707 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.424822 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.438019 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.439986 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.440046 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:22:59 crc 
kubenswrapper[4762]: I0308 00:22:59.440064 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.440139 4762 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 08 00:22:59 crc kubenswrapper[4762]: E0308 00:22:59.440988 4762 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.196:6443: connect: connection refused" node="crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.526679 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.526752 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.526856 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.526896 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.526928 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.526962 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.526972 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527054 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.526991 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527107 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527118 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527148 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527187 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527198 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527222 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527245 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527256 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527291 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527290 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527329 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527340 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527372 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527404 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527442 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527490 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527546 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") 
" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527623 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527673 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527375 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.527407 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.641158 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.647262 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.647337 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 
08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.647353 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.647386 4762 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 08 00:22:59 crc kubenswrapper[4762]: E0308 00:22:59.648017 4762 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.196:6443: connect: connection refused" node="crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.709811 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.736947 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.746013 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-fca3ae654b38d10c23b53538347d019b4b54eeb214fb4cf4d759a00b78d17ae4 WatchSource:0}: Error finding container fca3ae654b38d10c23b53538347d019b4b54eeb214fb4cf4d759a00b78d17ae4: Status 404 returned error can't find the container with id fca3ae654b38d10c23b53538347d019b4b54eeb214fb4cf4d759a00b78d17ae4 Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.759403 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.767196 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-80b92804a820417fc3ea015b81b550fa4c68c6995a0ee0ef5b910196d6ebf48b WatchSource:0}: Error finding container 80b92804a820417fc3ea015b81b550fa4c68c6995a0ee0ef5b910196d6ebf48b: Status 404 returned error can't find the container with id 80b92804a820417fc3ea015b81b550fa4c68c6995a0ee0ef5b910196d6ebf48b Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.777712 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3e835e4d5ef3c4566b8f41284212192deb9a84c17617d145f91f48fc1519dfc4 WatchSource:0}: Error finding container 3e835e4d5ef3c4566b8f41284212192deb9a84c17617d145f91f48fc1519dfc4: Status 404 returned error can't find the container with id 3e835e4d5ef3c4566b8f41284212192deb9a84c17617d145f91f48fc1519dfc4 Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.790216 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: E0308 00:22:59.791508 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="800ms" Mar 08 00:22:59 crc kubenswrapper[4762]: I0308 00:22:59.801003 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.811312 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-8fbdceb127a0802f0d890a4d3613340458187c516fe1945bd7f3b61fe07b05ca WatchSource:0}: Error finding container 8fbdceb127a0802f0d890a4d3613340458187c516fe1945bd7f3b61fe07b05ca: Status 404 returned error can't find the container with id 8fbdceb127a0802f0d890a4d3613340458187c516fe1945bd7f3b61fe07b05ca Mar 08 00:22:59 crc kubenswrapper[4762]: W0308 00:22:59.822470 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-3cbd1f95bcebaffe19182d03ca98dd922bd147c94d48bf1630eaddde30f09b57 WatchSource:0}: Error finding container 3cbd1f95bcebaffe19182d03ca98dd922bd147c94d48bf1630eaddde30f09b57: Status 404 returned error can't find the container with id 3cbd1f95bcebaffe19182d03ca98dd922bd147c94d48bf1630eaddde30f09b57 Mar 08 00:23:00 crc kubenswrapper[4762]: I0308 00:23:00.049040 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:00 crc kubenswrapper[4762]: I0308 00:23:00.051098 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:00 crc kubenswrapper[4762]: I0308 00:23:00.051216 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:00 crc kubenswrapper[4762]: I0308 00:23:00.051234 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:00 crc kubenswrapper[4762]: I0308 00:23:00.051281 4762 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 08 00:23:00 crc 
kubenswrapper[4762]: E0308 00:23:00.052061 4762 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.196:6443: connect: connection refused" node="crc" Mar 08 00:23:00 crc kubenswrapper[4762]: W0308 00:23:00.131360 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Mar 08 00:23:00 crc kubenswrapper[4762]: E0308 00:23:00.131451 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Mar 08 00:23:00 crc kubenswrapper[4762]: I0308 00:23:00.186882 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Mar 08 00:23:00 crc kubenswrapper[4762]: E0308 00:23:00.209020 4762 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189ab5e082272d72 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.184618866 +0000 UTC m=+0.658763250,LastTimestamp:2026-03-08 00:22:59.184618866 +0000 UTC 
m=+0.658763250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:00 crc kubenswrapper[4762]: I0308 00:23:00.266090 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fca3ae654b38d10c23b53538347d019b4b54eeb214fb4cf4d759a00b78d17ae4"} Mar 08 00:23:00 crc kubenswrapper[4762]: I0308 00:23:00.267581 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3cbd1f95bcebaffe19182d03ca98dd922bd147c94d48bf1630eaddde30f09b57"} Mar 08 00:23:00 crc kubenswrapper[4762]: I0308 00:23:00.269101 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8fbdceb127a0802f0d890a4d3613340458187c516fe1945bd7f3b61fe07b05ca"} Mar 08 00:23:00 crc kubenswrapper[4762]: I0308 00:23:00.271669 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e835e4d5ef3c4566b8f41284212192deb9a84c17617d145f91f48fc1519dfc4"} Mar 08 00:23:00 crc kubenswrapper[4762]: I0308 00:23:00.272671 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"80b92804a820417fc3ea015b81b550fa4c68c6995a0ee0ef5b910196d6ebf48b"} Mar 08 00:23:00 crc kubenswrapper[4762]: W0308 00:23:00.303039 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Mar 08 00:23:00 crc kubenswrapper[4762]: E0308 00:23:00.303142 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Mar 08 00:23:00 crc kubenswrapper[4762]: E0308 00:23:00.592732 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="1.6s" Mar 08 00:23:00 crc kubenswrapper[4762]: W0308 00:23:00.594053 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Mar 08 00:23:00 crc kubenswrapper[4762]: E0308 00:23:00.594152 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Mar 08 00:23:00 crc kubenswrapper[4762]: W0308 00:23:00.746384 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Mar 08 00:23:00 crc kubenswrapper[4762]: E0308 
00:23:00.746537 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Mar 08 00:23:00 crc kubenswrapper[4762]: I0308 00:23:00.853033 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:00 crc kubenswrapper[4762]: I0308 00:23:00.855383 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:00 crc kubenswrapper[4762]: I0308 00:23:00.855427 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:00 crc kubenswrapper[4762]: I0308 00:23:00.855440 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:00 crc kubenswrapper[4762]: I0308 00:23:00.855468 4762 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 08 00:23:00 crc kubenswrapper[4762]: E0308 00:23:00.855939 4762 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.196:6443: connect: connection refused" node="crc" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.087711 4762 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 08 00:23:01 crc kubenswrapper[4762]: E0308 00:23:01.089163 4762 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 
38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.187366 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.281397 4762 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="886aedadc77d591f29f4b9d40f25a3ed3b995c95ecf21fc7f0c471249bc8b609" exitCode=0 Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.281506 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"886aedadc77d591f29f4b9d40f25a3ed3b995c95ecf21fc7f0c471249bc8b609"} Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.281541 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.283095 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.283149 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.283168 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.285100 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"18130c148da5eceba8fbf2ff562d785c80b0707c2f4fe31add478c51e69d825a"} Mar 08 00:23:01 crc 
kubenswrapper[4762]: I0308 00:23:01.285138 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d7dc43b94483b191269958f97bb05775c3ca2f45710b19afc37d73ea96f44ab0"} Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.287620 4762 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b" exitCode=0 Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.287718 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b"} Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.287819 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.289338 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.289843 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.289918 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.289670 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080"} Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.289620 4762 generic.go:334] "Generic (PLEG): container finished" 
podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080" exitCode=0 Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.289891 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.291252 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.291299 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.291316 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.292655 4762 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="74357138d162c507faaf210f3d3a8ce81e50e8b700cb24ffc4d59788dd499f8d" exitCode=0 Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.292690 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.292715 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.292742 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"74357138d162c507faaf210f3d3a8ce81e50e8b700cb24ffc4d59788dd499f8d"} Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.294056 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.294108 4762 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.294109 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.294152 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.294171 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:01 crc kubenswrapper[4762]: I0308 00:23:01.294128 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:01 crc kubenswrapper[4762]: W0308 00:23:01.736467 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Mar 08 00:23:01 crc kubenswrapper[4762]: E0308 00:23:01.736598 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.196:6443: connect: connection refused" logger="UnhandledError" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.186539 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.196:6443: connect: connection refused Mar 08 00:23:02 crc kubenswrapper[4762]: E0308 00:23:02.194191 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="3.2s" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.299581 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6fbd1064daf2610313d1abfbcb0d3fb7a4e8d2912d98064582ca49b0bf78e8a0"} Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.299628 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"660ccddae7d224d63f1b250267d667b2ab6c8a9988234a56a061cc416c0b534b"} Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.299640 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"386ccf0fd3fa0449e16e3593c95ba84bdb3918530f66e6a1dd3407bac7522130"} Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.299676 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.300829 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.300870 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.300890 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.302399 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.302390 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f3d22fe667d7944359b090357394cff4061a122a64989f918328de24f93c36a9"} Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.302537 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f383b4a945c8e18addd1ed2d19889fa080f5fbc162719d14b9ece2aadb0286df"} Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.303676 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.303745 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.303785 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.308672 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8"} Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.308712 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc"} Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.308726 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e"} Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.308741 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32"} Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.311025 4762 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49" exitCode=0 Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.311117 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49"} Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.311248 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.312141 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.312171 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.312182 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.312931 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ebac05294da08672b105f2ab6d9a0e355b10f0e546622b80976a12a5e52f9801"} Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.312987 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.313922 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.313949 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.313957 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.456065 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.457802 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.457844 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.457858 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:02 crc kubenswrapper[4762]: I0308 00:23:02.457888 4762 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 08 00:23:02 crc kubenswrapper[4762]: E0308 00:23:02.458310 4762 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.196:6443: connect: connection refused" node="crc" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.322592 4762 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cec29163ea02357374a445fcea46f8f34cbec37eb1c2d44f50c9eed9bb506c58"} Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.322712 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.325343 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.325428 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.325455 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.328712 4762 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0" exitCode=0 Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.328844 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.328850 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0"} Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.329076 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.329156 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 
00:23:03.329424 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.329573 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.329964 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.329989 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.329999 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.330664 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.330688 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.330697 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.330703 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.330736 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.330752 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.331713 4762 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.331738 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:03 crc kubenswrapper[4762]: I0308 00:23:03.331754 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:04 crc kubenswrapper[4762]: I0308 00:23:04.337736 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"68595f5de1a1079d9fb869521151ac1e0c37f24ea45029cd2ee44d63252685eb"} Mar 08 00:23:04 crc kubenswrapper[4762]: I0308 00:23:04.337831 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"99b0fad6cee96b801aec34455f6485f25221c40b655c26dfbf7e7c2c548618e2"} Mar 08 00:23:04 crc kubenswrapper[4762]: I0308 00:23:04.337858 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c6e82b1d21398de30a1313583757107986ea8218a412db94198a7b65edf7b580"} Mar 08 00:23:04 crc kubenswrapper[4762]: I0308 00:23:04.337876 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"316ca2d63b5ba7595979d24bf20c2803b73bdff2b589d7af2e17d2d4810b121b"} Mar 08 00:23:04 crc kubenswrapper[4762]: I0308 00:23:04.337838 4762 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 00:23:04 crc kubenswrapper[4762]: I0308 00:23:04.337900 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:04 crc kubenswrapper[4762]: I0308 00:23:04.337940 4762 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Mar 08 00:23:04 crc kubenswrapper[4762]: I0308 00:23:04.339313 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:04 crc kubenswrapper[4762]: I0308 00:23:04.339355 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:04 crc kubenswrapper[4762]: I0308 00:23:04.339377 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:04 crc kubenswrapper[4762]: I0308 00:23:04.339489 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:04 crc kubenswrapper[4762]: I0308 00:23:04.339521 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:04 crc kubenswrapper[4762]: I0308 00:23:04.339541 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:04 crc kubenswrapper[4762]: I0308 00:23:04.630697 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.288468 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.288738 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.290099 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.290158 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:05 crc 
kubenswrapper[4762]: I0308 00:23:05.290177 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.321340 4762 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.344538 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.344559 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"13951f78d098c0736e7125a2bb1aac981c81285615c42d5f5548e9c29b28f242"} Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.344589 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.345466 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.345495 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.345509 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.345804 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.345829 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.345840 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:05 crc 
kubenswrapper[4762]: I0308 00:23:05.659365 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.661889 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.661946 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.661964 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:05 crc kubenswrapper[4762]: I0308 00:23:05.662005 4762 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 08 00:23:06 crc kubenswrapper[4762]: I0308 00:23:06.337651 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:23:06 crc kubenswrapper[4762]: I0308 00:23:06.347884 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:06 crc kubenswrapper[4762]: I0308 00:23:06.347920 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:06 crc kubenswrapper[4762]: I0308 00:23:06.349631 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:06 crc kubenswrapper[4762]: I0308 00:23:06.349674 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:06 crc kubenswrapper[4762]: I0308 00:23:06.349695 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:06 crc kubenswrapper[4762]: I0308 00:23:06.349715 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Mar 08 00:23:06 crc kubenswrapper[4762]: I0308 00:23:06.349737 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:06 crc kubenswrapper[4762]: I0308 00:23:06.349718 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:06 crc kubenswrapper[4762]: I0308 00:23:06.598514 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:23:06 crc kubenswrapper[4762]: I0308 00:23:06.598744 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:06 crc kubenswrapper[4762]: I0308 00:23:06.600288 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:06 crc kubenswrapper[4762]: I0308 00:23:06.600340 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:06 crc kubenswrapper[4762]: I0308 00:23:06.600358 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:06 crc kubenswrapper[4762]: I0308 00:23:06.605265 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:23:07 crc kubenswrapper[4762]: I0308 00:23:07.350714 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:07 crc kubenswrapper[4762]: I0308 00:23:07.352274 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:07 crc kubenswrapper[4762]: I0308 00:23:07.352337 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:07 crc kubenswrapper[4762]: I0308 
00:23:07.352356 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:07 crc kubenswrapper[4762]: I0308 00:23:07.356285 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:23:07 crc kubenswrapper[4762]: I0308 00:23:07.356664 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:07 crc kubenswrapper[4762]: I0308 00:23:07.358701 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:07 crc kubenswrapper[4762]: I0308 00:23:07.358805 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:07 crc kubenswrapper[4762]: I0308 00:23:07.358834 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:07 crc kubenswrapper[4762]: I0308 00:23:07.870307 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:23:08 crc kubenswrapper[4762]: I0308 00:23:08.275798 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Mar 08 00:23:08 crc kubenswrapper[4762]: I0308 00:23:08.276037 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:08 crc kubenswrapper[4762]: I0308 00:23:08.277566 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:08 crc kubenswrapper[4762]: I0308 00:23:08.277615 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:08 crc kubenswrapper[4762]: I0308 00:23:08.277635 4762 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:08 crc kubenswrapper[4762]: I0308 00:23:08.353620 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:08 crc kubenswrapper[4762]: I0308 00:23:08.355089 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:08 crc kubenswrapper[4762]: I0308 00:23:08.355165 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:08 crc kubenswrapper[4762]: I0308 00:23:08.355195 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:09 crc kubenswrapper[4762]: I0308 00:23:09.192663 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:23:09 crc kubenswrapper[4762]: E0308 00:23:09.346939 4762 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 08 00:23:09 crc kubenswrapper[4762]: I0308 00:23:09.356029 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:09 crc kubenswrapper[4762]: I0308 00:23:09.357073 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:09 crc kubenswrapper[4762]: I0308 00:23:09.357108 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:09 crc kubenswrapper[4762]: I0308 00:23:09.357120 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:10 crc kubenswrapper[4762]: I0308 00:23:10.870919 4762 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 00:23:10 crc kubenswrapper[4762]: I0308 00:23:10.871035 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 00:23:12 crc kubenswrapper[4762]: W0308 00:23:12.903586 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:12Z is after 2026-02-23T05:33:13Z Mar 08 00:23:12 crc kubenswrapper[4762]: I0308 00:23:12.903690 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:12Z is after 2026-02-23T05:33:13Z Mar 08 00:23:12 crc kubenswrapper[4762]: E0308 00:23:12.903723 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:12Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 08 00:23:12 crc 
kubenswrapper[4762]: W0308 00:23:12.909952 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:12Z is after 2026-02-23T05:33:13Z Mar 08 00:23:12 crc kubenswrapper[4762]: E0308 00:23:12.910089 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:12Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 08 00:23:12 crc kubenswrapper[4762]: W0308 00:23:12.911950 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:12Z is after 2026-02-23T05:33:13Z Mar 08 00:23:12 crc kubenswrapper[4762]: E0308 00:23:12.912025 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:12Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 08 00:23:12 crc kubenswrapper[4762]: E0308 00:23:12.921614 4762 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed 
certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:12Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 08 00:23:12 crc kubenswrapper[4762]: W0308 00:23:12.924301 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:12Z is after 2026-02-23T05:33:13Z Mar 08 00:23:12 crc kubenswrapper[4762]: E0308 00:23:12.924411 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:12Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 08 00:23:12 crc kubenswrapper[4762]: E0308 00:23:12.925415 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:12Z is after 2026-02-23T05:33:13Z" interval="6.4s" Mar 08 00:23:12 crc kubenswrapper[4762]: E0308 00:23:12.927398 4762 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-08T00:23:12Z is after 2026-02-23T05:33:13Z" node="crc" Mar 08 00:23:12 crc kubenswrapper[4762]: E0308 00:23:12.933616 4762 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:12Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ab5e082272d72 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.184618866 +0000 UTC m=+0.658763250,LastTimestamp:2026-03-08 00:22:59.184618866 +0000 UTC m=+0.658763250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:12 crc kubenswrapper[4762]: I0308 00:23:12.934800 4762 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 08 00:23:12 crc kubenswrapper[4762]: I0308 00:23:12.934869 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 08 00:23:12 crc kubenswrapper[4762]: I0308 00:23:12.942607 4762 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe 
status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 08 00:23:12 crc kubenswrapper[4762]: I0308 00:23:12.942655 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Mar 08 00:23:13 crc kubenswrapper[4762]: I0308 00:23:13.189819 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:13Z is after 2026-02-23T05:33:13Z Mar 08 00:23:13 crc kubenswrapper[4762]: I0308 00:23:13.369736 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 08 00:23:13 crc kubenswrapper[4762]: I0308 00:23:13.372410 4762 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cec29163ea02357374a445fcea46f8f34cbec37eb1c2d44f50c9eed9bb506c58" exitCode=255 Mar 08 00:23:13 crc kubenswrapper[4762]: I0308 00:23:13.372491 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cec29163ea02357374a445fcea46f8f34cbec37eb1c2d44f50c9eed9bb506c58"} Mar 08 00:23:13 crc kubenswrapper[4762]: I0308 00:23:13.372726 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:13 crc kubenswrapper[4762]: I0308 
00:23:13.374072 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:13 crc kubenswrapper[4762]: I0308 00:23:13.374285 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:13 crc kubenswrapper[4762]: I0308 00:23:13.374493 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:13 crc kubenswrapper[4762]: I0308 00:23:13.375448 4762 scope.go:117] "RemoveContainer" containerID="cec29163ea02357374a445fcea46f8f34cbec37eb1c2d44f50c9eed9bb506c58" Mar 08 00:23:14 crc kubenswrapper[4762]: I0308 00:23:14.190589 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:14Z is after 2026-02-23T05:33:13Z Mar 08 00:23:14 crc kubenswrapper[4762]: I0308 00:23:14.376543 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 08 00:23:14 crc kubenswrapper[4762]: I0308 00:23:14.378687 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5f10fa79bb2dc8f149009a9287b96f6f6b117217a5ac5cb4ee353cf63dbd33c2"} Mar 08 00:23:14 crc kubenswrapper[4762]: I0308 00:23:14.379002 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:14 crc kubenswrapper[4762]: I0308 00:23:14.380381 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:14 crc kubenswrapper[4762]: I0308 
00:23:14.380452 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:14 crc kubenswrapper[4762]: I0308 00:23:14.380476 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:14 crc kubenswrapper[4762]: I0308 00:23:14.630947 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.046271 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.046500 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.048696 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.048811 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.048833 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.083202 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.192223 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:15Z is after 2026-02-23T05:33:13Z Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.295665 4762 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.295927 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.297867 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.297932 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.297949 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.384045 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.385173 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.388592 4762 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5f10fa79bb2dc8f149009a9287b96f6f6b117217a5ac5cb4ee353cf63dbd33c2" exitCode=255 Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.388676 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"5f10fa79bb2dc8f149009a9287b96f6f6b117217a5ac5cb4ee353cf63dbd33c2"} Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.388803 4762 scope.go:117] "RemoveContainer" 
containerID="cec29163ea02357374a445fcea46f8f34cbec37eb1c2d44f50c9eed9bb506c58" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.388831 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.388848 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.390513 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.390545 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.390604 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.390565 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.390648 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.390677 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.396622 4762 scope.go:117] "RemoveContainer" containerID="5f10fa79bb2dc8f149009a9287b96f6f6b117217a5ac5cb4ee353cf63dbd33c2" Mar 08 00:23:15 crc kubenswrapper[4762]: E0308 00:23:15.398202 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 08 00:23:15 crc kubenswrapper[4762]: I0308 00:23:15.413541 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Mar 08 00:23:16 crc kubenswrapper[4762]: I0308 00:23:16.190814 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:16Z is after 2026-02-23T05:33:13Z Mar 08 00:23:16 crc kubenswrapper[4762]: I0308 00:23:16.395073 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 08 00:23:16 crc kubenswrapper[4762]: I0308 00:23:16.398890 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:16 crc kubenswrapper[4762]: I0308 00:23:16.399031 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:16 crc kubenswrapper[4762]: I0308 00:23:16.400290 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:16 crc kubenswrapper[4762]: I0308 00:23:16.400478 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:16 crc kubenswrapper[4762]: I0308 00:23:16.400607 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:16 crc kubenswrapper[4762]: I0308 00:23:16.400868 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:16 crc kubenswrapper[4762]: I0308 00:23:16.400901 4762 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:16 crc kubenswrapper[4762]: I0308 00:23:16.400919 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:16 crc kubenswrapper[4762]: I0308 00:23:16.402309 4762 scope.go:117] "RemoveContainer" containerID="5f10fa79bb2dc8f149009a9287b96f6f6b117217a5ac5cb4ee353cf63dbd33c2" Mar 08 00:23:16 crc kubenswrapper[4762]: E0308 00:23:16.402609 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 08 00:23:16 crc kubenswrapper[4762]: W0308 00:23:16.511320 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:16Z is after 2026-02-23T05:33:13Z Mar 08 00:23:16 crc kubenswrapper[4762]: E0308 00:23:16.511412 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 08 00:23:16 crc kubenswrapper[4762]: W0308 00:23:16.867122 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:16Z is after 2026-02-23T05:33:13Z Mar 08 00:23:16 crc kubenswrapper[4762]: E0308 00:23:16.867269 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:16Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 08 00:23:17 crc kubenswrapper[4762]: I0308 00:23:17.190818 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:17Z is after 2026-02-23T05:33:13Z Mar 08 00:23:17 crc kubenswrapper[4762]: I0308 00:23:17.365231 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:23:17 crc kubenswrapper[4762]: I0308 00:23:17.401602 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:17 crc kubenswrapper[4762]: I0308 00:23:17.404051 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:17 crc kubenswrapper[4762]: I0308 00:23:17.404103 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:17 crc kubenswrapper[4762]: I0308 00:23:17.404124 4762 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:17 crc kubenswrapper[4762]: I0308 00:23:17.405048 4762 scope.go:117] "RemoveContainer" containerID="5f10fa79bb2dc8f149009a9287b96f6f6b117217a5ac5cb4ee353cf63dbd33c2" Mar 08 00:23:17 crc kubenswrapper[4762]: E0308 00:23:17.405356 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 08 00:23:17 crc kubenswrapper[4762]: I0308 00:23:17.408960 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:23:18 crc kubenswrapper[4762]: I0308 00:23:18.192817 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:18Z is after 2026-02-23T05:33:13Z Mar 08 00:23:18 crc kubenswrapper[4762]: W0308 00:23:18.365742 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:18Z is after 2026-02-23T05:33:13Z Mar 08 00:23:18 crc kubenswrapper[4762]: E0308 00:23:18.365853 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:18Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 08 00:23:18 crc kubenswrapper[4762]: I0308 00:23:18.404232 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:18 crc kubenswrapper[4762]: I0308 00:23:18.405938 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:18 crc kubenswrapper[4762]: I0308 00:23:18.405998 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:18 crc kubenswrapper[4762]: I0308 00:23:18.406022 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:18 crc kubenswrapper[4762]: I0308 00:23:18.407097 4762 scope.go:117] "RemoveContainer" containerID="5f10fa79bb2dc8f149009a9287b96f6f6b117217a5ac5cb4ee353cf63dbd33c2" Mar 08 00:23:18 crc kubenswrapper[4762]: E0308 00:23:18.407402 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 08 00:23:19 crc kubenswrapper[4762]: I0308 00:23:19.191504 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:19Z is after 
2026-02-23T05:33:13Z Mar 08 00:23:19 crc kubenswrapper[4762]: I0308 00:23:19.327540 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:19 crc kubenswrapper[4762]: I0308 00:23:19.329417 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:19 crc kubenswrapper[4762]: I0308 00:23:19.329482 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:19 crc kubenswrapper[4762]: I0308 00:23:19.329502 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:19 crc kubenswrapper[4762]: I0308 00:23:19.329543 4762 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 08 00:23:19 crc kubenswrapper[4762]: E0308 00:23:19.330693 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:19Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 08 00:23:19 crc kubenswrapper[4762]: E0308 00:23:19.334320 4762 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:19Z is after 2026-02-23T05:33:13Z" node="crc" Mar 08 00:23:19 crc kubenswrapper[4762]: E0308 00:23:19.347074 4762 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 08 00:23:20 crc kubenswrapper[4762]: I0308 00:23:20.190458 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:20Z is after 2026-02-23T05:33:13Z Mar 08 00:23:20 crc kubenswrapper[4762]: I0308 00:23:20.363279 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:23:20 crc kubenswrapper[4762]: I0308 00:23:20.363547 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:20 crc kubenswrapper[4762]: I0308 00:23:20.365698 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:20 crc kubenswrapper[4762]: I0308 00:23:20.365805 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:20 crc kubenswrapper[4762]: I0308 00:23:20.365833 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:20 crc kubenswrapper[4762]: I0308 00:23:20.366814 4762 scope.go:117] "RemoveContainer" containerID="5f10fa79bb2dc8f149009a9287b96f6f6b117217a5ac5cb4ee353cf63dbd33c2" Mar 08 00:23:20 crc kubenswrapper[4762]: E0308 00:23:20.367082 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 08 00:23:20 crc kubenswrapper[4762]: I0308 00:23:20.872018 4762 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure 
output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 00:23:20 crc kubenswrapper[4762]: I0308 00:23:20.872433 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 00:23:21 crc kubenswrapper[4762]: I0308 00:23:21.190047 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:21Z is after 2026-02-23T05:33:13Z Mar 08 00:23:21 crc kubenswrapper[4762]: I0308 00:23:21.319933 4762 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 08 00:23:21 crc kubenswrapper[4762]: E0308 00:23:21.325450 4762 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:21Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 08 00:23:22 crc kubenswrapper[4762]: I0308 00:23:22.192030 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:22Z is after 2026-02-23T05:33:13Z Mar 08 00:23:22 crc kubenswrapper[4762]: E0308 00:23:22.940052 4762 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:22Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ab5e082272d72 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.184618866 +0000 UTC m=+0.658763250,LastTimestamp:2026-03-08 00:22:59.184618866 +0000 UTC m=+0.658763250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:23 crc kubenswrapper[4762]: I0308 00:23:23.189617 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:23Z is after 2026-02-23T05:33:13Z Mar 08 00:23:23 crc kubenswrapper[4762]: W0308 00:23:23.707672 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:23Z is after 2026-02-23T05:33:13Z Mar 08 00:23:23 crc kubenswrapper[4762]: E0308 00:23:23.707809 4762 reflector.go:158] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:23Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 08 00:23:24 crc kubenswrapper[4762]: I0308 00:23:24.195323 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:24Z is after 2026-02-23T05:33:13Z Mar 08 00:23:25 crc kubenswrapper[4762]: W0308 00:23:25.020065 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:25Z is after 2026-02-23T05:33:13Z Mar 08 00:23:25 crc kubenswrapper[4762]: E0308 00:23:25.020174 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:25Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 08 00:23:25 crc kubenswrapper[4762]: W0308 00:23:25.140364 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:25Z is after 2026-02-23T05:33:13Z Mar 08 00:23:25 crc kubenswrapper[4762]: E0308 00:23:25.140496 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:25Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 08 00:23:25 crc kubenswrapper[4762]: I0308 00:23:25.191651 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:25Z is after 2026-02-23T05:33:13Z Mar 08 00:23:26 crc kubenswrapper[4762]: I0308 00:23:26.189874 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:26Z is after 2026-02-23T05:33:13Z Mar 08 00:23:26 crc kubenswrapper[4762]: I0308 00:23:26.334833 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:26 crc kubenswrapper[4762]: I0308 00:23:26.336319 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:26 crc kubenswrapper[4762]: I0308 00:23:26.336354 4762 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:26 crc kubenswrapper[4762]: I0308 00:23:26.336363 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:26 crc kubenswrapper[4762]: I0308 00:23:26.336384 4762 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 08 00:23:26 crc kubenswrapper[4762]: E0308 00:23:26.337600 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:26Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 08 00:23:26 crc kubenswrapper[4762]: E0308 00:23:26.340287 4762 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:26Z is after 2026-02-23T05:33:13Z" node="crc" Mar 08 00:23:27 crc kubenswrapper[4762]: I0308 00:23:27.191651 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:27Z is after 2026-02-23T05:33:13Z Mar 08 00:23:27 crc kubenswrapper[4762]: W0308 00:23:27.796333 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:27Z is after 2026-02-23T05:33:13Z 
Mar 08 00:23:27 crc kubenswrapper[4762]: E0308 00:23:27.796888 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:27Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 08 00:23:28 crc kubenswrapper[4762]: I0308 00:23:28.192057 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:28Z is after 2026-02-23T05:33:13Z Mar 08 00:23:29 crc kubenswrapper[4762]: I0308 00:23:29.192022 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:29Z is after 2026-02-23T05:33:13Z Mar 08 00:23:29 crc kubenswrapper[4762]: E0308 00:23:29.347282 4762 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 08 00:23:30 crc kubenswrapper[4762]: I0308 00:23:30.191537 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:30Z is after 2026-02-23T05:33:13Z Mar 08 00:23:30 crc kubenswrapper[4762]: I0308 00:23:30.870733 4762 patch_prober.go:28] interesting 
pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 00:23:30 crc kubenswrapper[4762]: I0308 00:23:30.870877 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 00:23:30 crc kubenswrapper[4762]: I0308 00:23:30.870964 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:23:30 crc kubenswrapper[4762]: I0308 00:23:30.871164 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:30 crc kubenswrapper[4762]: I0308 00:23:30.872399 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:30 crc kubenswrapper[4762]: I0308 00:23:30.872440 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:30 crc kubenswrapper[4762]: I0308 00:23:30.872449 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:30 crc kubenswrapper[4762]: I0308 00:23:30.872900 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"18130c148da5eceba8fbf2ff562d785c80b0707c2f4fe31add478c51e69d825a"} 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 08 00:23:30 crc kubenswrapper[4762]: I0308 00:23:30.873065 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://18130c148da5eceba8fbf2ff562d785c80b0707c2f4fe31add478c51e69d825a" gracePeriod=30 Mar 08 00:23:31 crc kubenswrapper[4762]: I0308 00:23:31.190744 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:31Z is after 2026-02-23T05:33:13Z Mar 08 00:23:31 crc kubenswrapper[4762]: I0308 00:23:31.262664 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:31 crc kubenswrapper[4762]: I0308 00:23:31.264333 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:31 crc kubenswrapper[4762]: I0308 00:23:31.264385 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:31 crc kubenswrapper[4762]: I0308 00:23:31.264403 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:31 crc kubenswrapper[4762]: I0308 00:23:31.265332 4762 scope.go:117] "RemoveContainer" containerID="5f10fa79bb2dc8f149009a9287b96f6f6b117217a5ac5cb4ee353cf63dbd33c2" Mar 08 00:23:31 crc kubenswrapper[4762]: I0308 00:23:31.454323 4762 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 08 00:23:31 crc kubenswrapper[4762]: I0308 00:23:31.455066 4762 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="18130c148da5eceba8fbf2ff562d785c80b0707c2f4fe31add478c51e69d825a" exitCode=255 Mar 08 00:23:31 crc kubenswrapper[4762]: I0308 00:23:31.455156 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"18130c148da5eceba8fbf2ff562d785c80b0707c2f4fe31add478c51e69d825a"} Mar 08 00:23:31 crc kubenswrapper[4762]: I0308 00:23:31.455219 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2526f3ad92693bcab80d9b610109060b4d9e70c746c48a9cfb755a86f01a2ead"} Mar 08 00:23:31 crc kubenswrapper[4762]: I0308 00:23:31.455392 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:31 crc kubenswrapper[4762]: I0308 00:23:31.457210 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:31 crc kubenswrapper[4762]: I0308 00:23:31.457265 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:31 crc kubenswrapper[4762]: I0308 00:23:31.457284 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:32 crc kubenswrapper[4762]: I0308 00:23:32.189589 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:32Z is after 2026-02-23T05:33:13Z Mar 08 00:23:32 crc kubenswrapper[4762]: I0308 00:23:32.460717 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 08 00:23:32 crc kubenswrapper[4762]: I0308 00:23:32.461712 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 08 00:23:32 crc kubenswrapper[4762]: I0308 00:23:32.464880 4762 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e2d7483ca237e22271a342bf1a6111e8d66e455c02ecb8e26c15e0e1990e6d04" exitCode=255 Mar 08 00:23:32 crc kubenswrapper[4762]: I0308 00:23:32.464936 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e2d7483ca237e22271a342bf1a6111e8d66e455c02ecb8e26c15e0e1990e6d04"} Mar 08 00:23:32 crc kubenswrapper[4762]: I0308 00:23:32.464984 4762 scope.go:117] "RemoveContainer" containerID="5f10fa79bb2dc8f149009a9287b96f6f6b117217a5ac5cb4ee353cf63dbd33c2" Mar 08 00:23:32 crc kubenswrapper[4762]: I0308 00:23:32.465188 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:32 crc kubenswrapper[4762]: I0308 00:23:32.466704 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:32 crc kubenswrapper[4762]: I0308 00:23:32.466786 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:32 crc kubenswrapper[4762]: I0308 00:23:32.466805 4762 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:32 crc kubenswrapper[4762]: I0308 00:23:32.467945 4762 scope.go:117] "RemoveContainer" containerID="e2d7483ca237e22271a342bf1a6111e8d66e455c02ecb8e26c15e0e1990e6d04" Mar 08 00:23:32 crc kubenswrapper[4762]: E0308 00:23:32.468383 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 08 00:23:32 crc kubenswrapper[4762]: E0308 00:23:32.947370 4762 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:32Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189ab5e082272d72 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.184618866 +0000 UTC m=+0.658763250,LastTimestamp:2026-03-08 00:22:59.184618866 +0000 UTC m=+0.658763250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:33 crc kubenswrapper[4762]: I0308 00:23:33.188232 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-08T00:23:33Z is after 2026-02-23T05:33:13Z Mar 08 00:23:33 crc kubenswrapper[4762]: I0308 00:23:33.341481 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:33 crc kubenswrapper[4762]: E0308 00:23:33.342813 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:33Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 08 00:23:33 crc kubenswrapper[4762]: I0308 00:23:33.343623 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:33 crc kubenswrapper[4762]: I0308 00:23:33.343669 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:33 crc kubenswrapper[4762]: I0308 00:23:33.343684 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:33 crc kubenswrapper[4762]: I0308 00:23:33.343715 4762 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 08 00:23:33 crc kubenswrapper[4762]: E0308 00:23:33.348474 4762 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:33Z is after 2026-02-23T05:33:13Z" node="crc" Mar 08 00:23:33 crc kubenswrapper[4762]: I0308 00:23:33.469859 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 08 00:23:34 crc kubenswrapper[4762]: I0308 00:23:34.189572 4762 csi_plugin.go:884] Failed 
to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:34Z is after 2026-02-23T05:33:13Z Mar 08 00:23:34 crc kubenswrapper[4762]: I0308 00:23:34.630795 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:23:34 crc kubenswrapper[4762]: I0308 00:23:34.631041 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:34 crc kubenswrapper[4762]: I0308 00:23:34.632723 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:34 crc kubenswrapper[4762]: I0308 00:23:34.632824 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:34 crc kubenswrapper[4762]: I0308 00:23:34.632850 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:34 crc kubenswrapper[4762]: I0308 00:23:34.633863 4762 scope.go:117] "RemoveContainer" containerID="e2d7483ca237e22271a342bf1a6111e8d66e455c02ecb8e26c15e0e1990e6d04" Mar 08 00:23:34 crc kubenswrapper[4762]: E0308 00:23:34.634213 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 08 00:23:35 crc kubenswrapper[4762]: I0308 00:23:35.195737 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:35Z is after 2026-02-23T05:33:13Z Mar 08 00:23:36 crc kubenswrapper[4762]: I0308 00:23:36.191302 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:36Z is after 2026-02-23T05:33:13Z Mar 08 00:23:37 crc kubenswrapper[4762]: I0308 00:23:37.191076 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:37Z is after 2026-02-23T05:33:13Z Mar 08 00:23:37 crc kubenswrapper[4762]: I0308 00:23:37.673438 4762 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 08 00:23:37 crc kubenswrapper[4762]: E0308 00:23:37.679114 4762 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:37Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 08 00:23:37 crc kubenswrapper[4762]: E0308 00:23:37.680498 4762 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" 
logger="UnhandledError" Mar 08 00:23:37 crc kubenswrapper[4762]: I0308 00:23:37.871279 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:23:37 crc kubenswrapper[4762]: I0308 00:23:37.871973 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:37 crc kubenswrapper[4762]: I0308 00:23:37.873659 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:37 crc kubenswrapper[4762]: I0308 00:23:37.873713 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:37 crc kubenswrapper[4762]: I0308 00:23:37.873730 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:38 crc kubenswrapper[4762]: I0308 00:23:38.192984 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:38Z is after 2026-02-23T05:33:13Z Mar 08 00:23:39 crc kubenswrapper[4762]: I0308 00:23:39.192650 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:39Z is after 2026-02-23T05:33:13Z Mar 08 00:23:39 crc kubenswrapper[4762]: I0308 00:23:39.192838 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:23:39 crc kubenswrapper[4762]: I0308 00:23:39.193072 4762 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:39 crc kubenswrapper[4762]: I0308 00:23:39.194855 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:39 crc kubenswrapper[4762]: I0308 00:23:39.194916 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:39 crc kubenswrapper[4762]: I0308 00:23:39.194936 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:39 crc kubenswrapper[4762]: E0308 00:23:39.347514 4762 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 08 00:23:40 crc kubenswrapper[4762]: I0308 00:23:40.192229 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:40Z is after 2026-02-23T05:33:13Z Mar 08 00:23:40 crc kubenswrapper[4762]: I0308 00:23:40.348951 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:40 crc kubenswrapper[4762]: E0308 00:23:40.349058 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:40Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 08 00:23:40 crc kubenswrapper[4762]: I0308 00:23:40.351804 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:40 crc 
kubenswrapper[4762]: I0308 00:23:40.351880 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:40 crc kubenswrapper[4762]: I0308 00:23:40.351900 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:40 crc kubenswrapper[4762]: I0308 00:23:40.351943 4762 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 08 00:23:40 crc kubenswrapper[4762]: E0308 00:23:40.357341 4762 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:40Z is after 2026-02-23T05:33:13Z" node="crc" Mar 08 00:23:40 crc kubenswrapper[4762]: I0308 00:23:40.362632 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:23:40 crc kubenswrapper[4762]: I0308 00:23:40.363265 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:40 crc kubenswrapper[4762]: I0308 00:23:40.365305 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:40 crc kubenswrapper[4762]: I0308 00:23:40.365414 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:40 crc kubenswrapper[4762]: I0308 00:23:40.365451 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:40 crc kubenswrapper[4762]: I0308 00:23:40.366435 4762 scope.go:117] "RemoveContainer" containerID="e2d7483ca237e22271a342bf1a6111e8d66e455c02ecb8e26c15e0e1990e6d04" Mar 08 00:23:40 crc kubenswrapper[4762]: E0308 00:23:40.366769 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 08 00:23:40 crc kubenswrapper[4762]: I0308 00:23:40.872166 4762 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 00:23:40 crc kubenswrapper[4762]: I0308 00:23:40.872317 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 00:23:41 crc kubenswrapper[4762]: I0308 00:23:41.193868 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-08T00:23:41Z is after 2026-02-23T05:33:13Z Mar 08 00:23:42 crc kubenswrapper[4762]: I0308 00:23:42.196063 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:42 crc kubenswrapper[4762]: E0308 00:23:42.954681 4762 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e082272d72 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.184618866 +0000 UTC m=+0.658763250,LastTimestamp:2026-03-08 00:22:59.184618866 +0000 UTC m=+0.658763250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:42 crc kubenswrapper[4762]: E0308 00:23:42.962401 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865dcb5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.255307102 +0000 UTC m=+0.729451456,LastTimestamp:2026-03-08 00:22:59.255307102 +0000 UTC m=+0.729451456,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:42 crc kubenswrapper[4762]: E0308 00:23:42.972832 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865e1cf6 default 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.25532799 +0000 UTC m=+0.729472344,LastTimestamp:2026-03-08 00:22:59.25532799 +0000 UTC m=+0.729472344,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:42 crc kubenswrapper[4762]: E0308 00:23:42.980177 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865e451e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.25533827 +0000 UTC m=+0.729482624,LastTimestamp:2026-03-08 00:22:59.25533827 +0000 UTC m=+0.729482624,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:42 crc kubenswrapper[4762]: E0308 00:23:42.987573 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e08b685649 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across 
pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.339884105 +0000 UTC m=+0.814028489,LastTimestamp:2026-03-08 00:22:59.339884105 +0000 UTC m=+0.814028489,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:42 crc kubenswrapper[4762]: E0308 00:23:42.995820 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865dcb5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865dcb5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.255307102 +0000 UTC m=+0.729451456,LastTimestamp:2026-03-08 00:22:59.36472537 +0000 UTC m=+0.838869744,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.009723 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865e1cf6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865e1cf6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.25532799 +0000 UTC m=+0.729472344,LastTimestamp:2026-03-08 
00:22:59.364807144 +0000 UTC m=+0.838951518,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.018564 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865e451e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865e451e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.25533827 +0000 UTC m=+0.729482624,LastTimestamp:2026-03-08 00:22:59.364828353 +0000 UTC m=+0.838972737,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.026447 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865dcb5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865dcb5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.255307102 +0000 UTC m=+0.729451456,LastTimestamp:2026-03-08 00:22:59.366518781 +0000 UTC m=+0.840663155,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.031508 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865dcb5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865dcb5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.255307102 +0000 UTC m=+0.729451456,LastTimestamp:2026-03-08 00:22:59.366559658 +0000 UTC m=+0.840704032,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.036197 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865e1cf6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865e1cf6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.25532799 +0000 UTC m=+0.729472344,LastTimestamp:2026-03-08 00:22:59.366583196 +0000 UTC m=+0.840727580,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.042538 4762 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865e451e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865e451e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.25533827 +0000 UTC m=+0.729482624,LastTimestamp:2026-03-08 00:22:59.366598385 +0000 UTC m=+0.840742769,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.049619 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865e1cf6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865e1cf6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.25532799 +0000 UTC m=+0.729472344,LastTimestamp:2026-03-08 00:22:59.366640372 +0000 UTC m=+0.840784756,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.054220 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865e451e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865e451e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.25533827 +0000 UTC m=+0.729482624,LastTimestamp:2026-03-08 00:22:59.366692378 +0000 UTC m=+0.840836762,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.060615 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865dcb5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865dcb5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.255307102 +0000 UTC m=+0.729451456,LastTimestamp:2026-03-08 00:22:59.368201431 +0000 UTC m=+0.842345815,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.066081 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865e1cf6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865e1cf6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.25532799 +0000 UTC m=+0.729472344,LastTimestamp:2026-03-08 00:22:59.368221749 +0000 UTC m=+0.842366133,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.071967 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865e451e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865e451e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.25533827 +0000 UTC m=+0.729482624,LastTimestamp:2026-03-08 00:22:59.368244937 +0000 UTC m=+0.842389321,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.077521 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865dcb5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865dcb5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.255307102 +0000 UTC m=+0.729451456,LastTimestamp:2026-03-08 00:22:59.368488739 +0000 UTC m=+0.842633113,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.083881 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865e1cf6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865e1cf6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.25532799 +0000 UTC m=+0.729472344,LastTimestamp:2026-03-08 00:22:59.368517907 +0000 UTC m=+0.842662291,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.091025 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865e451e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865e451e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.25533827 +0000 UTC 
m=+0.729482624,LastTimestamp:2026-03-08 00:22:59.368533916 +0000 UTC m=+0.842678290,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.097099 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865dcb5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865dcb5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.255307102 +0000 UTC m=+0.729451456,LastTimestamp:2026-03-08 00:22:59.369681064 +0000 UTC m=+0.843825438,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.102624 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865e1cf6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865e1cf6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.25532799 +0000 UTC m=+0.729472344,LastTimestamp:2026-03-08 00:22:59.369710822 +0000 UTC m=+0.843855196,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.109922 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865e451e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865e451e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.25533827 +0000 UTC m=+0.729482624,LastTimestamp:2026-03-08 00:22:59.36972975 +0000 UTC m=+0.843874134,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.117143 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865dcb5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865dcb5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.255307102 +0000 UTC m=+0.729451456,LastTimestamp:2026-03-08 00:22:59.371094093 +0000 UTC m=+0.845238477,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.124979 4762 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189ab5e0865dcb5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189ab5e0865dcb5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.255307102 +0000 UTC m=+0.729451456,LastTimestamp:2026-03-08 00:22:59.371123221 +0000 UTC m=+0.845267595,Count:9,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.134612 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ab5e0a40f61a3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.753484707 +0000 UTC m=+1.227629051,LastTimestamp:2026-03-08 00:22:59.753484707 +0000 UTC m=+1.227629051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.142529 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e0a5091fcb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.769851851 +0000 UTC m=+1.243996195,LastTimestamp:2026-03-08 00:22:59.769851851 +0000 UTC m=+1.243996195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.147744 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e0a5abf99c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.780524444 +0000 UTC m=+1.254668818,LastTimestamp:2026-03-08 00:22:59.780524444 +0000 UTC m=+1.254668818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.154707 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e0a7a94f8a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.813904266 +0000 UTC m=+1.288048650,LastTimestamp:2026-03-08 00:22:59.813904266 +0000 UTC m=+1.288048650,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.160421 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ab5e0a873d63f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:22:59.827177023 +0000 UTC m=+1.301321367,LastTimestamp:2026-03-08 00:22:59.827177023 +0000 UTC m=+1.301321367,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.166999 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ab5e0de0d4787 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:00.726425479 +0000 UTC m=+2.200569863,LastTimestamp:2026-03-08 00:23:00.726425479 +0000 UTC m=+2.200569863,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.172742 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e0de0dad7c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:00.72645158 +0000 UTC m=+2.200595974,LastTimestamp:2026-03-08 00:23:00.72645158 +0000 UTC m=+2.200595974,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: W0308 00:23:43.173117 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.173197 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.174983 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ab5e0de12faad openshift-machine-config-operator 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:00.726799021 +0000 UTC m=+2.200943435,LastTimestamp:2026-03-08 00:23:00.726799021 +0000 UTC m=+2.200943435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.179124 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e0de188a46 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:00.727163462 +0000 UTC m=+2.201307856,LastTimestamp:2026-03-08 00:23:00.727163462 +0000 UTC m=+2.201307856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.186058 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e0de273f6d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:00.728127341 +0000 UTC m=+2.202271725,LastTimestamp:2026-03-08 00:23:00.728127341 +0000 UTC m=+2.202271725,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: I0308 00:23:43.192572 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.192871 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ab5e0dff0f5f1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:00.758124017 +0000 UTC m=+2.232268371,LastTimestamp:2026-03-08 00:23:00.758124017 +0000 UTC m=+2.232268371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.199539 
4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ab5e0dffb0ebf openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:00.758785727 +0000 UTC m=+2.232930081,LastTimestamp:2026-03-08 00:23:00.758785727 +0000 UTC m=+2.232930081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.206664 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e0e00295f4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:00.759279092 +0000 UTC m=+2.233423476,LastTimestamp:2026-03-08 00:23:00.759279092 +0000 UTC m=+2.233423476,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc 
kubenswrapper[4762]: E0308 00:23:43.212719 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e0e003d8f9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:00.759361785 +0000 UTC m=+2.233506139,LastTimestamp:2026-03-08 00:23:00.759361785 +0000 UTC m=+2.233506139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.218845 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e0e006b5f6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:00.75954943 +0000 UTC m=+2.233693804,LastTimestamp:2026-03-08 00:23:00.75954943 +0000 UTC m=+2.233693804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.225894 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e0e0189267 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:00.760719975 +0000 UTC m=+2.234864329,LastTimestamp:2026-03-08 00:23:00.760719975 +0000 UTC m=+2.234864329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.232973 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e0f54b7675 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.116376693 +0000 UTC m=+2.590521077,LastTimestamp:2026-03-08 00:23:01.116376693 +0000 UTC m=+2.590521077,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.237613 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e0f652be76 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.133631094 +0000 UTC m=+2.607775478,LastTimestamp:2026-03-08 00:23:01.133631094 +0000 UTC m=+2.607775478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.244665 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e0f668f3d4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.135086548 +0000 UTC m=+2.609230942,LastTimestamp:2026-03-08 00:23:01.135086548 +0000 UTC m=+2.609230942,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.251274 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ab5e0ff5e8df1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.285400049 +0000 UTC m=+2.759544403,LastTimestamp:2026-03-08 00:23:01.285400049 +0000 UTC 
m=+2.759544403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.258323 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e0ffc98ee8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.292412648 +0000 UTC m=+2.766557032,LastTimestamp:2026-03-08 00:23:01.292412648 +0000 UTC m=+2.766557032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.266032 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e10016f9f5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.297486325 +0000 UTC m=+2.771630709,LastTimestamp:2026-03-08 00:23:01.297486325 +0000 UTC m=+2.771630709,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.269645 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ab5e10018a1af openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.297594799 +0000 UTC m=+2.771739183,LastTimestamp:2026-03-08 00:23:01.297594799 +0000 UTC m=+2.771739183,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.274548 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e1065ca9b0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.402716592 +0000 UTC m=+2.876860956,LastTimestamp:2026-03-08 00:23:01.402716592 +0000 UTC m=+2.876860956,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.281896 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e107bcace0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.42578608 +0000 UTC m=+2.899930434,LastTimestamp:2026-03-08 00:23:01.42578608 +0000 UTC m=+2.899930434,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.288429 4762 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e107d1e853 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.427177555 +0000 UTC m=+2.901321909,LastTimestamp:2026-03-08 00:23:01.427177555 +0000 UTC m=+2.901321909,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.296168 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ab5e10e2922ab openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.533557419 +0000 UTC 
m=+3.007701763,LastTimestamp:2026-03-08 00:23:01.533557419 +0000 UTC m=+3.007701763,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.301658 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e10e538057 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.536333911 +0000 UTC m=+3.010478255,LastTimestamp:2026-03-08 00:23:01.536333911 +0000 UTC m=+3.010478255,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.309177 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e10e538755 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.536335701 +0000 UTC 
m=+3.010480055,LastTimestamp:2026-03-08 00:23:01.536335701 +0000 UTC m=+3.010480055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.315723 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ab5e10f2bab26 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.550500646 +0000 UTC m=+3.024644990,LastTimestamp:2026-03-08 00:23:01.550500646 +0000 UTC m=+3.024644990,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.322238 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e10f2d487c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container 
kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.55060646 +0000 UTC m=+3.024750804,LastTimestamp:2026-03-08 00:23:01.55060646 +0000 UTC m=+3.024750804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.329097 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e10f3c8c8b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.551606923 +0000 UTC m=+3.025751267,LastTimestamp:2026-03-08 00:23:01.551606923 +0000 UTC m=+3.025751267,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.334310 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ab5e10f42ca17 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.552015895 +0000 UTC m=+3.026160239,LastTimestamp:2026-03-08 00:23:01.552015895 +0000 UTC m=+3.026160239,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.339295 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ab5e10f711c91 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.555051665 +0000 UTC m=+3.029196009,LastTimestamp:2026-03-08 00:23:01.555051665 +0000 UTC m=+3.029196009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.344830 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189ab5e110f4bbb8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.58045484 +0000 UTC m=+3.054599174,LastTimestamp:2026-03-08 00:23:01.58045484 +0000 UTC m=+3.054599174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.350106 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e115a75967 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.659269479 +0000 UTC m=+3.133413823,LastTimestamp:2026-03-08 00:23:01.659269479 +0000 UTC m=+3.133413823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.357206 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e116aabf55 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.676269397 +0000 UTC m=+3.150413741,LastTimestamp:2026-03-08 00:23:01.676269397 +0000 UTC m=+3.150413741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.362436 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ab5e11a7f6bf4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container 
kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.740538868 +0000 UTC m=+3.214683212,LastTimestamp:2026-03-08 00:23:01.740538868 +0000 UTC m=+3.214683212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.368207 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ab5e11bd39ba8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.76283332 +0000 UTC m=+3.236977674,LastTimestamp:2026-03-08 00:23:01.76283332 +0000 UTC m=+3.236977674,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.375120 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ab5e11be78414 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.764138004 +0000 UTC m=+3.238282368,LastTimestamp:2026-03-08 00:23:01.764138004 +0000 UTC m=+3.238282368,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.380025 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e11bf603a4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.765088164 +0000 UTC m=+3.239232518,LastTimestamp:2026-03-08 00:23:01.765088164 +0000 UTC m=+3.239232518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.387012 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e11d833418 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.79111836 +0000 UTC m=+3.265262704,LastTimestamp:2026-03-08 00:23:01.79111836 +0000 UTC m=+3.265262704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.393943 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e11d9d87e2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.792843746 +0000 UTC m=+3.266988090,LastTimestamp:2026-03-08 00:23:01.792843746 +0000 UTC m=+3.266988090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.399042 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ab5e126636b0d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.940030221 +0000 UTC m=+3.414174565,LastTimestamp:2026-03-08 00:23:01.940030221 +0000 UTC m=+3.414174565,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.405329 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e12680c386 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.941953414 
+0000 UTC m=+3.416097748,LastTimestamp:2026-03-08 00:23:01.941953414 +0000 UTC m=+3.416097748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.412542 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e12784b2ec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.958988524 +0000 UTC m=+3.433132878,LastTimestamp:2026-03-08 00:23:01.958988524 +0000 UTC m=+3.433132878,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.418500 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e127955f4e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.96008123 +0000 UTC m=+3.434225584,LastTimestamp:2026-03-08 00:23:01.96008123 +0000 UTC m=+3.434225584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.423864 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189ab5e12809ab2e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.96770283 +0000 UTC m=+3.441847174,LastTimestamp:2026-03-08 00:23:01.96770283 +0000 UTC m=+3.441847174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.431522 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e12e773d2a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:02.075546922 +0000 UTC m=+3.549691266,LastTimestamp:2026-03-08 00:23:02.075546922 +0000 UTC m=+3.549691266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.439237 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e1320952be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:02.13545235 +0000 UTC m=+3.609596694,LastTimestamp:2026-03-08 00:23:02.13545235 +0000 UTC m=+3.609596694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.446610 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e132bb487d 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:02.147115133 +0000 UTC m=+3.621259477,LastTimestamp:2026-03-08 00:23:02.147115133 +0000 UTC m=+3.621259477,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.454295 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e132cfcd79 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:02.148459897 +0000 UTC m=+3.622604241,LastTimestamp:2026-03-08 00:23:02.148459897 +0000 UTC m=+3.622604241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.463296 4762 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e13ca62f74 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:02.313504628 +0000 UTC m=+3.787648972,LastTimestamp:2026-03-08 00:23:02.313504628 +0000 UTC m=+3.787648972,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.471978 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e13e28af5e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:02.33883427 +0000 UTC m=+3.812978614,LastTimestamp:2026-03-08 00:23:02.33883427 +0000 UTC m=+3.812978614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.479020 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e13ef2e2fc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:02.352085756 +0000 UTC m=+3.826230110,LastTimestamp:2026-03-08 00:23:02.352085756 +0000 UTC m=+3.826230110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.485231 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e146dea5d7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:02.484977111 +0000 UTC m=+3.959121455,LastTimestamp:2026-03-08 00:23:02.484977111 +0000 UTC 
m=+3.959121455,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.491647 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e147a3241e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:02.497854494 +0000 UTC m=+3.971998838,LastTimestamp:2026-03-08 00:23:02.497854494 +0000 UTC m=+3.971998838,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.501948 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e179526bdd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:03.331425245 +0000 UTC 
m=+4.805569629,LastTimestamp:2026-03-08 00:23:03.331425245 +0000 UTC m=+4.805569629,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.510035 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e185e1b9ae openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:03.542143406 +0000 UTC m=+5.016287780,LastTimestamp:2026-03-08 00:23:03.542143406 +0000 UTC m=+5.016287780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.519149 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e186752ac9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:03.551806153 +0000 UTC m=+5.025950537,LastTimestamp:2026-03-08 00:23:03.551806153 +0000 UTC 
m=+5.025950537,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: W0308 00:23:43.532464 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.532394 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e186906f87 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:03.553593223 +0000 UTC m=+5.027737577,LastTimestamp:2026-03-08 00:23:03.553593223 +0000 UTC m=+5.027737577,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.532544 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.538402 4762 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e192dfe20c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:03.760126476 +0000 UTC m=+5.234270860,LastTimestamp:2026-03-08 00:23:03.760126476 +0000 UTC m=+5.234270860,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.545441 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e193e9a587 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:03.777543559 +0000 UTC m=+5.251687943,LastTimestamp:2026-03-08 00:23:03.777543559 +0000 UTC m=+5.251687943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.552884 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e194006f77 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:03.779037047 +0000 UTC m=+5.253181401,LastTimestamp:2026-03-08 00:23:03.779037047 +0000 UTC m=+5.253181401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.559467 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e1a2eeeaed openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:04.029547245 +0000 UTC m=+5.503691589,LastTimestamp:2026-03-08 00:23:04.029547245 +0000 UTC m=+5.503691589,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.567199 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e1a394345a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:04.040379482 +0000 UTC m=+5.514523836,LastTimestamp:2026-03-08 00:23:04.040379482 +0000 UTC m=+5.514523836,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.574484 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e1a3a857ad openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:04.041699245 +0000 UTC m=+5.515843589,LastTimestamp:2026-03-08 00:23:04.041699245 +0000 UTC m=+5.515843589,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.581628 4762 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e1b0fcb57a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:04.26533209 +0000 UTC m=+5.739476434,LastTimestamp:2026-03-08 00:23:04.26533209 +0000 UTC m=+5.739476434,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.592863 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e1b241e7fd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:04.286644221 +0000 UTC m=+5.760788565,LastTimestamp:2026-03-08 00:23:04.286644221 +0000 UTC m=+5.760788565,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.601856 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e1b25fdba9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:04.288607145 +0000 UTC m=+5.762751489,LastTimestamp:2026-03-08 00:23:04.288607145 +0000 UTC m=+5.762751489,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.609979 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e1bd4a5e69 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:04.471748201 +0000 UTC m=+5.945892545,LastTimestamp:2026-03-08 00:23:04.471748201 +0000 UTC m=+5.945892545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.615628 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189ab5e1be6c0607 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:04.490731015 +0000 UTC m=+5.964875399,LastTimestamp:2026-03-08 00:23:04.490731015 +0000 UTC m=+5.964875399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.627836 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 08 00:23:43 crc kubenswrapper[4762]: &Event{ObjectMeta:{kube-controller-manager-crc.189ab5e33ab73329 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 08 00:23:43 crc kubenswrapper[4762]: body: Mar 08 00:23:43 crc kubenswrapper[4762]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:10.870999849 +0000 UTC m=+12.345144233,LastTimestamp:2026-03-08 00:23:10.870999849 +0000 UTC 
m=+12.345144233,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 08 00:23:43 crc kubenswrapper[4762]: > Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.633844 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e33ab86cb9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:10.871080121 +0000 UTC m=+12.345224495,LastTimestamp:2026-03-08 00:23:10.871080121 +0000 UTC m=+12.345224495,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.641615 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 08 00:23:43 crc kubenswrapper[4762]: &Event{ObjectMeta:{kube-apiserver-crc.189ab5e3b5bb0c1d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 08 00:23:43 crc kubenswrapper[4762]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 08 00:23:43 crc kubenswrapper[4762]: Mar 08 00:23:43 crc kubenswrapper[4762]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:12.934849565 +0000 UTC m=+14.408993949,LastTimestamp:2026-03-08 00:23:12.934849565 +0000 UTC m=+14.408993949,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 08 00:23:43 crc kubenswrapper[4762]: > Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.648959 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e3b5bbd356 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:12.934900566 +0000 UTC m=+14.409044940,LastTimestamp:2026-03-08 00:23:12.934900566 +0000 UTC m=+14.409044940,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.656127 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ab5e3b5bb0c1d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 08 00:23:43 crc kubenswrapper[4762]: &Event{ObjectMeta:{kube-apiserver-crc.189ab5e3b5bb0c1d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 08 00:23:43 crc kubenswrapper[4762]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 08 00:23:43 crc kubenswrapper[4762]: Mar 08 00:23:43 crc kubenswrapper[4762]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:12.934849565 +0000 UTC m=+14.408993949,LastTimestamp:2026-03-08 00:23:12.942641564 +0000 UTC m=+14.416785918,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 08 00:23:43 crc kubenswrapper[4762]: > Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.664414 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ab5e3b5bbd356\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e3b5bbd356 openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:12.934900566 +0000 UTC m=+14.409044940,LastTimestamp:2026-03-08 00:23:12.942685266 +0000 UTC m=+14.416829620,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.671558 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ab5e132cfcd79\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e132cfcd79 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:02.148459897 +0000 UTC m=+3.622604241,LastTimestamp:2026-03-08 00:23:13.377577257 +0000 UTC m=+14.851721641,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.679013 4762 event.go:359] "Server rejected event (will 
not retry!)" err="events \"kube-apiserver-crc.189ab5e13e28af5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e13e28af5e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:02.33883427 +0000 UTC m=+3.812978614,LastTimestamp:2026-03-08 00:23:13.641808135 +0000 UTC m=+15.115952509,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.682278 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189ab5e13ef2e2fc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189ab5e13ef2e2fc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:02.352085756 +0000 UTC m=+3.826230110,LastTimestamp:2026-03-08 00:23:13.652238166 +0000 UTC m=+15.126382540,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.687536 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 08 00:23:43 crc kubenswrapper[4762]: &Event{ObjectMeta:{kube-controller-manager-crc.189ab5e58ed861aa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 08 00:23:43 crc kubenswrapper[4762]: body: Mar 08 00:23:43 crc kubenswrapper[4762]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:20.872395178 +0000 UTC m=+22.346539592,LastTimestamp:2026-03-08 00:23:20.872395178 +0000 UTC m=+22.346539592,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 08 00:23:43 crc kubenswrapper[4762]: > Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.693975 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e58edb8053 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:20.872599635 +0000 UTC m=+22.346744019,LastTimestamp:2026-03-08 00:23:20.872599635 +0000 UTC m=+22.346744019,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.701035 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ab5e58ed861aa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 08 00:23:43 crc kubenswrapper[4762]: &Event{ObjectMeta:{kube-controller-manager-crc.189ab5e58ed861aa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 08 00:23:43 crc kubenswrapper[4762]: body: Mar 08 00:23:43 crc kubenswrapper[4762]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:20.872395178 +0000 UTC m=+22.346539592,LastTimestamp:2026-03-08 00:23:30.870837168 
+0000 UTC m=+32.344981562,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 08 00:23:43 crc kubenswrapper[4762]: > Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.707926 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ab5e58edb8053\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e58edb8053 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:20.872599635 +0000 UTC m=+22.346744019,LastTimestamp:2026-03-08 00:23:30.870921991 +0000 UTC m=+32.345066345,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.714918 4762 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e7e2ee4d82 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:30.873052546 +0000 UTC m=+32.347196890,LastTimestamp:2026-03-08 00:23:30.873052546 +0000 UTC m=+32.347196890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.722043 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ab5e0e0189267\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e0e0189267 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:00.760719975 +0000 UTC m=+2.234864329,LastTimestamp:2026-03-08 00:23:31.006380252 +0000 UTC m=+32.480524636,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.729231 4762 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ab5e0f54b7675\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e0f54b7675 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.116376693 +0000 UTC m=+2.590521077,LastTimestamp:2026-03-08 00:23:31.220916929 +0000 UTC m=+32.695061283,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.738144 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ab5e0f652be76\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e0f652be76 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:01.133631094 +0000 UTC 
m=+2.607775478,LastTimestamp:2026-03-08 00:23:31.233025621 +0000 UTC m=+32.707169975,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.748461 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ab5e58ed861aa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 08 00:23:43 crc kubenswrapper[4762]: &Event{ObjectMeta:{kube-controller-manager-crc.189ab5e58ed861aa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 08 00:23:43 crc kubenswrapper[4762]: body: Mar 08 00:23:43 crc kubenswrapper[4762]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:20.872395178 +0000 UTC m=+22.346539592,LastTimestamp:2026-03-08 00:23:40.872240649 +0000 UTC m=+42.346385033,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 08 00:23:43 crc kubenswrapper[4762]: > Mar 08 00:23:43 crc kubenswrapper[4762]: E0308 00:23:43.755375 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ab5e58edb8053\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189ab5e58edb8053 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:20.872599635 +0000 UTC m=+22.346744019,LastTimestamp:2026-03-08 00:23:40.872362113 +0000 UTC m=+42.346506497,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:23:44 crc kubenswrapper[4762]: I0308 00:23:44.194988 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:45 crc kubenswrapper[4762]: I0308 00:23:45.200263 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:46 crc kubenswrapper[4762]: I0308 00:23:46.195927 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:47 crc kubenswrapper[4762]: I0308 00:23:47.194118 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:47 crc kubenswrapper[4762]: I0308 00:23:47.357982 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:47 crc kubenswrapper[4762]: E0308 00:23:47.362038 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 08 00:23:47 crc kubenswrapper[4762]: I0308 00:23:47.364051 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:47 crc kubenswrapper[4762]: I0308 00:23:47.364134 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:47 crc kubenswrapper[4762]: I0308 00:23:47.364168 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:47 crc kubenswrapper[4762]: I0308 00:23:47.364241 4762 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 08 00:23:47 crc kubenswrapper[4762]: E0308 00:23:47.371659 4762 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 08 00:23:48 crc kubenswrapper[4762]: W0308 00:23:48.186498 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:48 crc kubenswrapper[4762]: E0308 00:23:48.186599 4762 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 08 00:23:48 crc kubenswrapper[4762]: I0308 00:23:48.193377 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:49 crc kubenswrapper[4762]: I0308 00:23:49.196000 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:49 crc kubenswrapper[4762]: E0308 00:23:49.347694 4762 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 08 00:23:50 crc kubenswrapper[4762]: I0308 00:23:50.192837 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:50 crc kubenswrapper[4762]: W0308 00:23:50.660067 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 08 00:23:50 crc kubenswrapper[4762]: E0308 00:23:50.660142 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" 
logger="UnhandledError" Mar 08 00:23:50 crc kubenswrapper[4762]: I0308 00:23:50.872294 4762 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 00:23:50 crc kubenswrapper[4762]: I0308 00:23:50.872403 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 00:23:50 crc kubenswrapper[4762]: E0308 00:23:50.878184 4762 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189ab5e58ed861aa\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 08 00:23:50 crc kubenswrapper[4762]: &Event{ObjectMeta:{kube-controller-manager-crc.189ab5e58ed861aa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 08 00:23:50 crc kubenswrapper[4762]: body: Mar 08 00:23:50 crc kubenswrapper[4762]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:23:20.872395178 +0000 UTC m=+22.346539592,LastTimestamp:2026-03-08 00:23:50.872372879 +0000 UTC m=+52.346517273,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 08 00:23:50 crc kubenswrapper[4762]: > Mar 08 00:23:51 crc kubenswrapper[4762]: I0308 00:23:51.193476 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:51 crc kubenswrapper[4762]: I0308 00:23:51.262915 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:51 crc kubenswrapper[4762]: I0308 00:23:51.264552 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:51 crc kubenswrapper[4762]: I0308 00:23:51.264611 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:51 crc kubenswrapper[4762]: I0308 00:23:51.264629 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:51 crc kubenswrapper[4762]: I0308 00:23:51.265486 4762 scope.go:117] "RemoveContainer" containerID="e2d7483ca237e22271a342bf1a6111e8d66e455c02ecb8e26c15e0e1990e6d04" Mar 08 00:23:51 crc kubenswrapper[4762]: E0308 00:23:51.265878 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 08 00:23:51 crc kubenswrapper[4762]: I0308 00:23:51.377196 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 08 00:23:51 crc kubenswrapper[4762]: I0308 00:23:51.377351 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:51 crc kubenswrapper[4762]: I0308 00:23:51.378745 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:51 crc kubenswrapper[4762]: I0308 00:23:51.378883 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:51 crc kubenswrapper[4762]: I0308 00:23:51.378906 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:52 crc kubenswrapper[4762]: I0308 00:23:52.192998 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:53 crc kubenswrapper[4762]: I0308 00:23:53.194218 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:54 crc kubenswrapper[4762]: I0308 00:23:54.193521 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:54 crc kubenswrapper[4762]: E0308 00:23:54.368331 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" 
is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 08 00:23:54 crc kubenswrapper[4762]: I0308 00:23:54.372556 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:54 crc kubenswrapper[4762]: I0308 00:23:54.374248 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:54 crc kubenswrapper[4762]: I0308 00:23:54.374333 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:54 crc kubenswrapper[4762]: I0308 00:23:54.374364 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:54 crc kubenswrapper[4762]: I0308 00:23:54.374421 4762 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 08 00:23:54 crc kubenswrapper[4762]: E0308 00:23:54.380512 4762 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 08 00:23:55 crc kubenswrapper[4762]: I0308 00:23:55.194121 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:56 crc kubenswrapper[4762]: I0308 00:23:56.193871 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:57 crc kubenswrapper[4762]: I0308 00:23:57.193573 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:58 crc kubenswrapper[4762]: I0308 00:23:58.192848 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:59 crc kubenswrapper[4762]: I0308 00:23:59.194495 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:23:59 crc kubenswrapper[4762]: E0308 00:23:59.347959 4762 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 08 00:23:59 crc kubenswrapper[4762]: I0308 00:23:59.634381 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:23:59 crc kubenswrapper[4762]: I0308 00:23:59.634664 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:23:59 crc kubenswrapper[4762]: I0308 00:23:59.636488 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:23:59 crc kubenswrapper[4762]: I0308 00:23:59.636560 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:23:59 crc kubenswrapper[4762]: I0308 00:23:59.636587 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:23:59 crc kubenswrapper[4762]: I0308 00:23:59.640580 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 00:24:00 crc kubenswrapper[4762]: I0308 00:24:00.191926 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:24:00 crc kubenswrapper[4762]: I0308 00:24:00.552576 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:24:00 crc kubenswrapper[4762]: I0308 00:24:00.554309 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:00 crc kubenswrapper[4762]: I0308 00:24:00.554355 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:00 crc kubenswrapper[4762]: I0308 00:24:00.554375 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:01 crc kubenswrapper[4762]: I0308 00:24:01.188891 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:24:01 crc kubenswrapper[4762]: E0308 00:24:01.374024 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 08 00:24:01 crc kubenswrapper[4762]: I0308 00:24:01.381164 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:24:01 crc kubenswrapper[4762]: I0308 00:24:01.382496 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 08 00:24:01 crc kubenswrapper[4762]: I0308 00:24:01.382542 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:01 crc kubenswrapper[4762]: I0308 00:24:01.382555 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:01 crc kubenswrapper[4762]: I0308 00:24:01.382587 4762 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 08 00:24:01 crc kubenswrapper[4762]: E0308 00:24:01.388931 4762 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 08 00:24:02 crc kubenswrapper[4762]: I0308 00:24:02.192921 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:24:03 crc kubenswrapper[4762]: I0308 00:24:03.192616 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:24:04 crc kubenswrapper[4762]: I0308 00:24:04.191015 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:24:05 crc kubenswrapper[4762]: I0308 00:24:05.191270 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 
08 00:24:06 crc kubenswrapper[4762]: I0308 00:24:06.193268 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:24:06 crc kubenswrapper[4762]: I0308 00:24:06.263051 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:24:06 crc kubenswrapper[4762]: I0308 00:24:06.264152 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:06 crc kubenswrapper[4762]: I0308 00:24:06.264195 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:06 crc kubenswrapper[4762]: I0308 00:24:06.264207 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:06 crc kubenswrapper[4762]: I0308 00:24:06.264686 4762 scope.go:117] "RemoveContainer" containerID="e2d7483ca237e22271a342bf1a6111e8d66e455c02ecb8e26c15e0e1990e6d04" Mar 08 00:24:06 crc kubenswrapper[4762]: I0308 00:24:06.568098 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 08 00:24:06 crc kubenswrapper[4762]: I0308 00:24:06.570734 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784"} Mar 08 00:24:06 crc kubenswrapper[4762]: I0308 00:24:06.570891 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:24:06 crc kubenswrapper[4762]: I0308 00:24:06.571634 4762 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:06 crc kubenswrapper[4762]: I0308 00:24:06.571689 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:06 crc kubenswrapper[4762]: I0308 00:24:06.571709 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:07 crc kubenswrapper[4762]: I0308 00:24:07.192218 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:24:07 crc kubenswrapper[4762]: I0308 00:24:07.576191 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 08 00:24:07 crc kubenswrapper[4762]: I0308 00:24:07.576822 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 08 00:24:07 crc kubenswrapper[4762]: I0308 00:24:07.578433 4762 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784" exitCode=255 Mar 08 00:24:07 crc kubenswrapper[4762]: I0308 00:24:07.578490 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784"} Mar 08 00:24:07 crc kubenswrapper[4762]: I0308 00:24:07.578539 4762 scope.go:117] "RemoveContainer" containerID="e2d7483ca237e22271a342bf1a6111e8d66e455c02ecb8e26c15e0e1990e6d04" Mar 08 00:24:07 crc kubenswrapper[4762]: I0308 00:24:07.578856 
4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:24:07 crc kubenswrapper[4762]: I0308 00:24:07.579928 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:07 crc kubenswrapper[4762]: I0308 00:24:07.579979 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:07 crc kubenswrapper[4762]: I0308 00:24:07.579998 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:07 crc kubenswrapper[4762]: I0308 00:24:07.580881 4762 scope.go:117] "RemoveContainer" containerID="74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784" Mar 08 00:24:07 crc kubenswrapper[4762]: E0308 00:24:07.581193 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 08 00:24:08 crc kubenswrapper[4762]: I0308 00:24:08.192359 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:24:08 crc kubenswrapper[4762]: E0308 00:24:08.379924 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 08 00:24:08 crc kubenswrapper[4762]: I0308 00:24:08.389482 4762 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Mar 08 00:24:08 crc kubenswrapper[4762]: I0308 00:24:08.391235 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:08 crc kubenswrapper[4762]: I0308 00:24:08.391287 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:08 crc kubenswrapper[4762]: I0308 00:24:08.391312 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:08 crc kubenswrapper[4762]: I0308 00:24:08.391368 4762 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 08 00:24:08 crc kubenswrapper[4762]: E0308 00:24:08.399004 4762 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 08 00:24:08 crc kubenswrapper[4762]: I0308 00:24:08.583322 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 08 00:24:09 crc kubenswrapper[4762]: I0308 00:24:09.190981 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:24:09 crc kubenswrapper[4762]: E0308 00:24:09.349091 4762 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 08 00:24:09 crc kubenswrapper[4762]: I0308 00:24:09.682900 4762 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 08 00:24:09 crc kubenswrapper[4762]: I0308 00:24:09.702485 4762 reflector.go:368] Caches populated 
for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 08 00:24:10 crc kubenswrapper[4762]: I0308 00:24:10.191963 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:24:10 crc kubenswrapper[4762]: I0308 00:24:10.363115 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:24:10 crc kubenswrapper[4762]: I0308 00:24:10.363289 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:24:10 crc kubenswrapper[4762]: I0308 00:24:10.364361 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:10 crc kubenswrapper[4762]: I0308 00:24:10.364412 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:10 crc kubenswrapper[4762]: I0308 00:24:10.364592 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:10 crc kubenswrapper[4762]: I0308 00:24:10.370245 4762 scope.go:117] "RemoveContainer" containerID="74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784" Mar 08 00:24:10 crc kubenswrapper[4762]: E0308 00:24:10.370552 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 08 00:24:11 crc kubenswrapper[4762]: I0308 00:24:11.190916 4762 csi_plugin.go:884] 
Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:24:12 crc kubenswrapper[4762]: W0308 00:24:12.117983 4762 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 08 00:24:12 crc kubenswrapper[4762]: E0308 00:24:12.118072 4762 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 08 00:24:12 crc kubenswrapper[4762]: I0308 00:24:12.191375 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:24:13 crc kubenswrapper[4762]: I0308 00:24:13.193175 4762 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 08 00:24:14 crc kubenswrapper[4762]: I0308 00:24:14.144364 4762 csr.go:261] certificate signing request csr-x5p4c is approved, waiting to be issued Mar 08 00:24:14 crc kubenswrapper[4762]: I0308 00:24:14.156351 4762 csr.go:257] certificate signing request csr-x5p4c is issued Mar 08 00:24:14 crc kubenswrapper[4762]: I0308 00:24:14.214204 4762 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 08 00:24:14 crc kubenswrapper[4762]: I0308 
00:24:14.263257 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:24:14 crc kubenswrapper[4762]: I0308 00:24:14.264912 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:14 crc kubenswrapper[4762]: I0308 00:24:14.264970 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:14 crc kubenswrapper[4762]: I0308 00:24:14.264992 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:14 crc kubenswrapper[4762]: I0308 00:24:14.631692 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:24:14 crc kubenswrapper[4762]: I0308 00:24:14.631940 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:24:14 crc kubenswrapper[4762]: I0308 00:24:14.633458 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:14 crc kubenswrapper[4762]: I0308 00:24:14.633526 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:14 crc kubenswrapper[4762]: I0308 00:24:14.633546 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:14 crc kubenswrapper[4762]: I0308 00:24:14.634927 4762 scope.go:117] "RemoveContainer" containerID="74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784" Mar 08 00:24:14 crc kubenswrapper[4762]: E0308 00:24:14.635253 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.031680 4762 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.158360 4762 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-04 13:03:26.661749061 +0000 UTC Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.158418 4762 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6516h39m11.503336894s for next certificate rotation Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.399354 4762 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.400995 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.401051 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.401069 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.401309 4762 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.411277 4762 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.411803 4762 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 08 00:24:15 crc kubenswrapper[4762]: E0308 00:24:15.411851 4762 
kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.416248 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.416314 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.416338 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.416364 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.416384 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:15Z","lastTransitionTime":"2026-03-08T00:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:15 crc kubenswrapper[4762]: E0308 00:24:15.435556 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.445661 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.445721 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.445738 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.445795 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.445815 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:15Z","lastTransitionTime":"2026-03-08T00:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:15 crc kubenswrapper[4762]: E0308 00:24:15.461165 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.471512 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.471562 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.471581 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.471605 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.471621 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:15Z","lastTransitionTime":"2026-03-08T00:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:15 crc kubenswrapper[4762]: E0308 00:24:15.485983 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.496426 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.496491 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.496516 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.496548 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.496573 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:15Z","lastTransitionTime":"2026-03-08T00:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:15 crc kubenswrapper[4762]: E0308 00:24:15.512303 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:15 crc kubenswrapper[4762]: E0308 00:24:15.512521 4762 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 00:24:15 crc kubenswrapper[4762]: E0308 00:24:15.512564 4762 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 08 00:24:15 crc kubenswrapper[4762]: E0308 00:24:15.613236 4762 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 08 00:24:15 crc kubenswrapper[4762]: E0308 00:24:15.713381 4762 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 08 00:24:15 crc kubenswrapper[4762]: E0308 00:24:15.813829 4762 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 08 00:24:15 crc kubenswrapper[4762]: E0308 00:24:15.914842 4762 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 08 00:24:15 crc kubenswrapper[4762]: I0308 00:24:15.955295 4762 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.018004 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.018080 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.018102 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.018135 4762 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.018157 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:16Z","lastTransitionTime":"2026-03-08T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.122084 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.122152 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.122172 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.122195 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.122212 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:16Z","lastTransitionTime":"2026-03-08T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.217127 4762 apiserver.go:52] "Watching apiserver" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.223012 4762 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.223356 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.223842 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.224025 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.224157 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.224105 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.224133 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.224449 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.224070 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.224877 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.225070 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.225410 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.225445 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.225462 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.225484 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.225500 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:16Z","lastTransitionTime":"2026-03-08T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.229523 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.230510 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.230508 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.230683 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.230829 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.231163 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.231165 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.233062 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.233242 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.254973 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.277452 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.290217 4762 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.295611 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.305509 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.319831 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.327208 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.327470 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.327649 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.327733 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.328118 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.328301 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.328221 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: 
"node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.328439 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.328474 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.328673 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.328605 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.329033 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.329090 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.329138 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.329182 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.329264 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" 
(OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.329280 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.330127 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.330469 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.330668 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.330566 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: 
"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.330727 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.330820 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.330876 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.330936 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.330990 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.331042 4762 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.331090 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.331144 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.331203 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.331252 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.331303 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.331356 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.331403 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.331686 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.331805 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.331865 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.331913 4762 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.331989 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.332150 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.332149 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.332180 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.332270 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.332513 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.332557 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.332578 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.332607 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.332628 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:16Z","lastTransitionTime":"2026-03-08T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.332873 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.332908 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.332952 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.332996 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.333330 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.333408 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.333363 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.333500 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.333565 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.333790 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.333824 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.333890 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.333930 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.332039 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334078 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334136 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334179 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). 
InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334185 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334237 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334289 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334337 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334384 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334429 4762 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334476 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334023 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334364 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334481 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334523 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334648 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334705 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334735 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334809 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334861 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334876 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.334999 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.335058 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 08 00:24:16 crc 
kubenswrapper[4762]: I0308 00:24:16.335114 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.335172 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.335229 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.335278 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.335336 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.335484 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.335533 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.335582 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.335631 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.335681 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.335731 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 08 
00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.335811 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.335957 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336008 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336054 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336110 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336155 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336204 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336253 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336301 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336506 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336562 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 08 
00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336611 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336659 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336709 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336800 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336852 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336899 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336947 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336994 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337040 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337086 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337139 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 
00:24:16.337192 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337242 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337293 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337348 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337402 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337453 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337510 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337563 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337614 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337704 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337753 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337839 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337889 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337938 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337988 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.338039 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.338095 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.338143 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.335328 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.335815 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336014 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336028 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.336499 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337062 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.337288 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.338134 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.338193 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.338416 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.338704 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.338801 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.338857 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.338963 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.339259 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.339386 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.338206 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.339615 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.339674 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.339712 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 08 00:24:16 crc 
kubenswrapper[4762]: I0308 00:24:16.339747 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.339859 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.339894 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.339890 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.339928 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.339960 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.339970 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.339995 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340029 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340062 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340096 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340127 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340160 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340195 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340228 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340263 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340296 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340331 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340367 4762 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340405 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340437 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340470 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340505 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340504 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod 
"6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340513 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340538 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340640 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340703 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.340800 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.341078 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.341177 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.341233 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.341414 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.341473 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.341473 4762 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.341737 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.341812 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.341921 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:24:16.841877847 +0000 UTC m=+78.316022311 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342071 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342133 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342191 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342247 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342273 4762 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342306 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342361 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342416 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342470 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342525 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342590 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342646 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342704 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342748 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342793 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342845 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342852 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342896 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342905 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.342951 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.343133 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.343236 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.343311 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.343337 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.343348 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.343365 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.343642 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.343731 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.343846 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.343895 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.343950 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344003 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344056 4762 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344109 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344161 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344211 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344260 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344312 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" 
(UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344363 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344418 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344470 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344522 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344574 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344642 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344693 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344746 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344863 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344915 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345035 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 08 00:24:16 crc 
kubenswrapper[4762]: I0308 00:24:16.345100 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345150 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345201 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345253 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345305 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345357 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345412 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345464 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345517 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345569 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345627 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345688 4762 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345748 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345855 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345910 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345964 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346021 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 08 
00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346074 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346127 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346229 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346337 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346403 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 08 00:24:16 
crc kubenswrapper[4762]: I0308 00:24:16.346458 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346523 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346581 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346638 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346693 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod 
\"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346754 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346876 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346939 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346990 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347042 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347094 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347224 4762 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347318 4762 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347347 4762 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347375 4762 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347405 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" 
DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347435 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347467 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347498 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347528 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347558 4762 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347586 4762 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347641 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347671 4762 
reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347703 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347734 4762 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347861 4762 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347892 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347924 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347952 4762 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347980 4762 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348012 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348046 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348075 4762 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348105 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348134 4762 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348165 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348195 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 08 
00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348224 4762 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348251 4762 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348280 4762 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348309 4762 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348346 4762 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348377 4762 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348409 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348436 4762 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348468 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348502 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348534 4762 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348565 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348596 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348624 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348653 4762 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348684 4762 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348714 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348744 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348828 4762 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348860 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348893 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348921 4762 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348952 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348981 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.349010 4762 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.349038 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.349071 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.349101 4762 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.349132 4762 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.349159 4762 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.349187 4762 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.349216 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.349246 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.349915 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.343938 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344205 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344251 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344508 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344700 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.360474 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.360615 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344956 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345264 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345614 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345635 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.345706 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346559 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346738 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.346846 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.360734 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347285 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347400 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347409 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347461 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.347665 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348172 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.348749 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.349234 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.349350 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.349602 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.349890 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.350171 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.350545 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.350904 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.351073 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.351929 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.352108 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.352662 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.352836 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.353601 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.353640 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.354620 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.355483 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.355492 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.355661 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.355655 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.355700 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.356083 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.356419 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.356433 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.356445 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.361052 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.356695 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.356959 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.357158 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.357292 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.361152 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.361260 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.361156 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.357597 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.357703 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.358212 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.358331 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.358547 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.358848 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.358951 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.358995 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.359017 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.359030 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.359151 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.359311 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.361512 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.361809 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.361916 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.361935 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.361959 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.362002 4762 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.362065 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). 
InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.362116 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:16.862083687 +0000 UTC m=+78.336228101 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.362406 4762 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.359770 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.359902 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.357371 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344921 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.356616 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.362490 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:16.862464769 +0000 UTC m=+78.336609153 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.366436 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.344732 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.365833 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.357316 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.359813 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.365919 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.367689 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.368958 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.369403 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.369558 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.369623 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.363670 4762 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.370176 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.379975 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.384659 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.385550 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.385596 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.386351 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.387267 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.387451 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.387482 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.387509 4762 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.387638 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:16.887603225 +0000 UTC m=+78.361747639 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.388086 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.389348 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.391507 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.392821 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.394207 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.396125 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.396165 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.396186 4762 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.396256 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:16.896230609 +0000 UTC m=+78.370375083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.400293 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.400424 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.400738 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.400808 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.401818 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.402314 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.402796 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.403782 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.403748 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.405361 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.405399 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.405522 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.405658 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.405938 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.405948 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.405987 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.406099 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.406155 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.406189 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.406936 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.407385 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.407577 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.407793 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.408588 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.409336 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.414680 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.414786 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.416151 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.416247 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.416299 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.416328 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.416368 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.416418 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.417213 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.417334 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.417562 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.417631 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.417952 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.418133 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.428597 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.430132 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.435142 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.435181 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.435194 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.435212 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.435225 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:16Z","lastTransitionTime":"2026-03-08T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.447585 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.448714 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450078 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450133 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450208 4762 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450223 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450237 4762 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450251 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450262 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450274 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450287 4762 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450298 4762 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450310 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450322 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450333 4762 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450343 4762 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450356 4762 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450367 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450379 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450381 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450393 4762 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450407 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450420 4762 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450432 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450445 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450456 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450468 4762 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450479 4762 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" 
(UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450490 4762 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450505 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450517 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450516 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450530 4762 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450570 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450589 4762 
reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450604 4762 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450620 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450633 4762 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450647 4762 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450664 4762 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450679 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450693 4762 reconciler_common.go:293] "Volume detached for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450705 4762 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450718 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450731 4762 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450745 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450781 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450795 4762 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450809 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" 
DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450822 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450835 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450846 4762 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450858 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450872 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450884 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450896 4762 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450908 4762 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450920 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450933 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450945 4762 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450957 4762 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450970 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450982 4762 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.450995 4762 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451007 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451036 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451051 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451063 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451076 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451089 4762 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451102 4762 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451114 4762 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451128 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451141 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451154 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451167 4762 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451179 4762 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451191 4762 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on 
node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451204 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451217 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451229 4762 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451242 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451254 4762 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451266 4762 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451278 4762 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451290 4762 
reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451301 4762 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451314 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451327 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451340 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451354 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451366 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451378 4762 reconciler_common.go:293] "Volume detached for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451390 4762 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451401 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451413 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451425 4762 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451437 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451464 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451476 4762 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" 
Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451489 4762 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451501 4762 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451512 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451524 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451535 4762 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451548 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451559 4762 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451572 4762 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451583 4762 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451596 4762 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451608 4762 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451619 4762 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451630 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451642 4762 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451654 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: 
\"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451667 4762 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451680 4762 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451691 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451703 4762 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451716 4762 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451729 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451741 4762 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 08 
00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451753 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451779 4762 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451790 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451802 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451814 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451826 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451838 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451849 4762 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451861 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451874 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451886 4762 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451897 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451909 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451922 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451935 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451947 4762 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451961 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.451972 4762 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.538046 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.538107 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.538125 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.538152 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.538172 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:16Z","lastTransitionTime":"2026-03-08T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.549347 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.566176 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.574342 4762 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 08 00:24:16 crc kubenswrapper[4762]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 08 00:24:16 crc kubenswrapper[4762]: set -o allexport Mar 08 00:24:16 crc kubenswrapper[4762]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 08 00:24:16 crc kubenswrapper[4762]: source /etc/kubernetes/apiserver-url.env Mar 08 00:24:16 crc kubenswrapper[4762]: else Mar 08 00:24:16 crc kubenswrapper[4762]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 08 00:24:16 crc kubenswrapper[4762]: exit 1 Mar 08 00:24:16 crc kubenswrapper[4762]: fi Mar 08 00:24:16 crc kubenswrapper[4762]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 08 00:24:16 crc kubenswrapper[4762]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 08 00:24:16 crc kubenswrapper[4762]: > logger="UnhandledError" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.575529 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.579656 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 08 00:24:16 crc kubenswrapper[4762]: W0308 00:24:16.582745 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-b84669764b3bccb6f5c0c8ca0c5a30e48d7d374cf69d786f567c21efe0261724 WatchSource:0}: Error finding container b84669764b3bccb6f5c0c8ca0c5a30e48d7d374cf69d786f567c21efe0261724: Status 404 returned error can't find the container with id b84669764b3bccb6f5c0c8ca0c5a30e48d7d374cf69d786f567c21efe0261724 Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.586523 4762 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 08 00:24:16 crc kubenswrapper[4762]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 08 00:24:16 crc kubenswrapper[4762]: if [[ -f "/env/_master" ]]; then Mar 08 00:24:16 crc kubenswrapper[4762]: set -o allexport Mar 08 00:24:16 crc kubenswrapper[4762]: source "/env/_master" Mar 08 00:24:16 crc kubenswrapper[4762]: set +o allexport Mar 08 00:24:16 crc kubenswrapper[4762]: fi Mar 08 00:24:16 crc kubenswrapper[4762]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 08 00:24:16 crc kubenswrapper[4762]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 08 00:24:16 crc kubenswrapper[4762]: ho_enable="--enable-hybrid-overlay" Mar 08 00:24:16 crc kubenswrapper[4762]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 08 00:24:16 crc kubenswrapper[4762]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 08 00:24:16 crc kubenswrapper[4762]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 08 00:24:16 crc kubenswrapper[4762]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 08 00:24:16 crc kubenswrapper[4762]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 08 00:24:16 crc kubenswrapper[4762]: --webhook-host=127.0.0.1 \ Mar 08 00:24:16 crc kubenswrapper[4762]: --webhook-port=9743 \ Mar 08 00:24:16 crc kubenswrapper[4762]: ${ho_enable} \ Mar 08 00:24:16 crc kubenswrapper[4762]: --enable-interconnect \ Mar 08 00:24:16 crc kubenswrapper[4762]: --disable-approver \ Mar 08 00:24:16 crc kubenswrapper[4762]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 08 00:24:16 crc kubenswrapper[4762]: --wait-for-kubernetes-api=200s \ Mar 08 00:24:16 crc kubenswrapper[4762]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 08 00:24:16 crc kubenswrapper[4762]: --loglevel="${LOGLEVEL}" Mar 08 00:24:16 crc kubenswrapper[4762]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 08 00:24:16 crc kubenswrapper[4762]: > logger="UnhandledError" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.589464 4762 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 08 00:24:16 crc kubenswrapper[4762]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 08 00:24:16 crc 
kubenswrapper[4762]: if [[ -f "/env/_master" ]]; then Mar 08 00:24:16 crc kubenswrapper[4762]: set -o allexport Mar 08 00:24:16 crc kubenswrapper[4762]: source "/env/_master" Mar 08 00:24:16 crc kubenswrapper[4762]: set +o allexport Mar 08 00:24:16 crc kubenswrapper[4762]: fi Mar 08 00:24:16 crc kubenswrapper[4762]: Mar 08 00:24:16 crc kubenswrapper[4762]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 08 00:24:16 crc kubenswrapper[4762]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 08 00:24:16 crc kubenswrapper[4762]: --disable-webhook \ Mar 08 00:24:16 crc kubenswrapper[4762]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 08 00:24:16 crc kubenswrapper[4762]: --loglevel="${LOGLEVEL}" Mar 08 00:24:16 crc kubenswrapper[4762]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 08 00:24:16 crc kubenswrapper[4762]: > logger="UnhandledError" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.591193 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 08 00:24:16 crc kubenswrapper[4762]: W0308 00:24:16.594792 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-c1b439c6208f6ee4c5d86507c59609113b3d8794c99123f2805943826c3ec434 WatchSource:0}: Error finding container c1b439c6208f6ee4c5d86507c59609113b3d8794c99123f2805943826c3ec434: Status 404 returned error can't find the container with id c1b439c6208f6ee4c5d86507c59609113b3d8794c99123f2805943826c3ec434 Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.598125 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.599470 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.610319 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b84669764b3bccb6f5c0c8ca0c5a30e48d7d374cf69d786f567c21efe0261724"} Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.615801 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a236b4786c690826c0ac5b0fe77bf1a525713ce90c6f89a9bce6e3f74cb70d5d"} Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.618659 4762 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 08 00:24:16 crc kubenswrapper[4762]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 08 00:24:16 crc kubenswrapper[4762]: if [[ -f "/env/_master" ]]; then Mar 08 00:24:16 crc kubenswrapper[4762]: set -o allexport Mar 08 00:24:16 crc kubenswrapper[4762]: source "/env/_master" Mar 08 00:24:16 crc kubenswrapper[4762]: set +o allexport Mar 08 00:24:16 crc kubenswrapper[4762]: fi Mar 08 00:24:16 crc kubenswrapper[4762]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 08 00:24:16 crc kubenswrapper[4762]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 08 00:24:16 crc kubenswrapper[4762]: ho_enable="--enable-hybrid-overlay" Mar 08 00:24:16 crc kubenswrapper[4762]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 08 00:24:16 crc kubenswrapper[4762]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 08 00:24:16 crc kubenswrapper[4762]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 08 00:24:16 crc kubenswrapper[4762]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 08 00:24:16 crc kubenswrapper[4762]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 08 00:24:16 crc kubenswrapper[4762]: --webhook-host=127.0.0.1 \ Mar 08 00:24:16 crc kubenswrapper[4762]: --webhook-port=9743 \ Mar 08 00:24:16 crc kubenswrapper[4762]: ${ho_enable} \ Mar 08 00:24:16 crc kubenswrapper[4762]: --enable-interconnect \ Mar 08 00:24:16 crc kubenswrapper[4762]: --disable-approver \ Mar 08 00:24:16 crc kubenswrapper[4762]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 08 00:24:16 crc kubenswrapper[4762]: --wait-for-kubernetes-api=200s \ Mar 08 00:24:16 crc kubenswrapper[4762]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 08 00:24:16 crc kubenswrapper[4762]: --loglevel="${LOGLEVEL}" Mar 08 00:24:16 crc kubenswrapper[4762]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 08 00:24:16 crc kubenswrapper[4762]: > logger="UnhandledError" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.619014 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c1b439c6208f6ee4c5d86507c59609113b3d8794c99123f2805943826c3ec434"} Mar 08 00:24:16 crc kubenswrapper[4762]: 
E0308 00:24:16.619019 4762 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 08 00:24:16 crc kubenswrapper[4762]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 08 00:24:16 crc kubenswrapper[4762]: set -o allexport Mar 08 00:24:16 crc kubenswrapper[4762]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 08 00:24:16 crc kubenswrapper[4762]: source /etc/kubernetes/apiserver-url.env Mar 08 00:24:16 crc kubenswrapper[4762]: else Mar 08 00:24:16 crc kubenswrapper[4762]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 08 00:24:16 crc kubenswrapper[4762]: exit 1 Mar 08 00:24:16 crc kubenswrapper[4762]: fi Mar 08 00:24:16 crc kubenswrapper[4762]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 08 00:24:16 crc kubenswrapper[4762]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom
:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 08 00:24:16 crc kubenswrapper[4762]: > logger="UnhandledError" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.621726 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.623060 4762 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 08 00:24:16 crc kubenswrapper[4762]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 08 00:24:16 crc kubenswrapper[4762]: if [[ -f "/env/_master" 
]]; then Mar 08 00:24:16 crc kubenswrapper[4762]: set -o allexport Mar 08 00:24:16 crc kubenswrapper[4762]: source "/env/_master" Mar 08 00:24:16 crc kubenswrapper[4762]: set +o allexport Mar 08 00:24:16 crc kubenswrapper[4762]: fi Mar 08 00:24:16 crc kubenswrapper[4762]: Mar 08 00:24:16 crc kubenswrapper[4762]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 08 00:24:16 crc kubenswrapper[4762]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 08 00:24:16 crc kubenswrapper[4762]: --disable-webhook \ Mar 08 00:24:16 crc kubenswrapper[4762]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 08 00:24:16 crc kubenswrapper[4762]: --loglevel="${LOGLEVEL}" Mar 08 00:24:16 crc kubenswrapper[4762]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 08 00:24:16 crc kubenswrapper[4762]: > logger="UnhandledError" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.623396 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.624409 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.624499 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.636597 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.643694 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.643822 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.643851 4762 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.643933 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.644006 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:16Z","lastTransitionTime":"2026-03-08T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.651189 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.666710 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.680842 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.695230 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.710597 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.725382 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.739187 4762 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.747874 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.747953 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.747974 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.748005 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.748026 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:16Z","lastTransitionTime":"2026-03-08T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.752634 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.769282 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.782716 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.795096 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.850591 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.850657 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.850677 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.850756 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.850814 4762 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:16Z","lastTransitionTime":"2026-03-08T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.858137 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.858362 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:24:17.858318297 +0000 UTC m=+79.332462681 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.954178 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.954246 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.954265 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.954293 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.954316 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:16Z","lastTransitionTime":"2026-03-08T00:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.959536 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.959627 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.959681 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:16 crc kubenswrapper[4762]: I0308 00:24:16.959734 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.959755 4762 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.959854 4762 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.959903 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:17.959871375 +0000 UTC m=+79.434015759 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.959947 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:17.959919046 +0000 UTC m=+79.434063430 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.959957 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.959996 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.959995 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.960024 4762 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.960045 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.960069 4762 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.960122 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:17.960098482 +0000 UTC m=+79.434242906 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:16 crc kubenswrapper[4762]: E0308 00:24:16.960167 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:17.960147443 +0000 UTC m=+79.434291947 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.057963 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.058087 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.058111 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.058143 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.058167 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:17Z","lastTransitionTime":"2026-03-08T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.160853 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.160908 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.160926 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.160951 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.160969 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:17Z","lastTransitionTime":"2026-03-08T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.262344 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:17 crc kubenswrapper[4762]: E0308 00:24:17.262535 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.264333 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.264435 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.264455 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.264517 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.264570 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:17Z","lastTransitionTime":"2026-03-08T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.269434 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.270537 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.273012 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.274303 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.276150 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.277990 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.279573 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.281653 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.283260 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.285928 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.287019 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.289665 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.290721 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.292060 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.294214 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.295394 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.297535 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.298439 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.299685 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.301903 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.303087 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.305053 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.305970 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.308055 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" 
path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.309041 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.310314 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.312518 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.313515 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.316307 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.317459 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.319525 4762 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.319819 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.323482 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.325404 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.326425 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.329990 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.331352 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.332441 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.333730 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.335108 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.336128 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.337360 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.338634 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.341417 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.342543 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.345038 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.347580 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.349459 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.350703 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.352499 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.353547 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.355320 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.356641 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.358115 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.368297 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.368522 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:17 crc 
kubenswrapper[4762]: I0308 00:24:17.368662 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.369050 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.369227 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:17Z","lastTransitionTime":"2026-03-08T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.473261 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.473318 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.473336 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.473361 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.473379 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:17Z","lastTransitionTime":"2026-03-08T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.576067 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.576111 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.576128 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.576151 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.576199 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:17Z","lastTransitionTime":"2026-03-08T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.679668 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.679739 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.679793 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.679820 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.679846 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:17Z","lastTransitionTime":"2026-03-08T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.785624 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.786267 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.786292 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.786326 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.786348 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:17Z","lastTransitionTime":"2026-03-08T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.867792 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:17 crc kubenswrapper[4762]: E0308 00:24:17.868054 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-08 00:24:19.868010254 +0000 UTC m=+81.342154638 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.890220 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.890286 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.890303 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.890330 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.890362 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:17Z","lastTransitionTime":"2026-03-08T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.969309 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.969393 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.969452 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.969507 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:17 crc kubenswrapper[4762]: E0308 00:24:17.969609 4762 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Mar 08 00:24:17 crc kubenswrapper[4762]: E0308 00:24:17.969611 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 00:24:17 crc kubenswrapper[4762]: E0308 00:24:17.969662 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 00:24:17 crc kubenswrapper[4762]: E0308 00:24:17.969691 4762 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:17 crc kubenswrapper[4762]: E0308 00:24:17.969692 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:19.969670154 +0000 UTC m=+81.443814528 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 08 00:24:17 crc kubenswrapper[4762]: E0308 00:24:17.969826 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:19.969798338 +0000 UTC m=+81.443942712 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:17 crc kubenswrapper[4762]: E0308 00:24:17.969859 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 00:24:17 crc kubenswrapper[4762]: E0308 00:24:17.969928 4762 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 08 00:24:17 crc kubenswrapper[4762]: E0308 00:24:17.969948 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 00:24:17 crc kubenswrapper[4762]: E0308 00:24:17.970013 4762 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:17 crc kubenswrapper[4762]: E0308 00:24:17.970081 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:19.970051816 +0000 UTC m=+81.444196200 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 08 00:24:17 crc kubenswrapper[4762]: E0308 00:24:17.970130 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:19.970100877 +0000 UTC m=+81.444245261 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.993942 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.994049 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.994069 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.994137 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:17 crc kubenswrapper[4762]: I0308 00:24:17.994155 4762 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:17Z","lastTransitionTime":"2026-03-08T00:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.096825 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.096931 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.096948 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.096973 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.096990 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:18Z","lastTransitionTime":"2026-03-08T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.199886 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.199928 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.199938 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.199953 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.199964 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:18Z","lastTransitionTime":"2026-03-08T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.263106 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:18 crc kubenswrapper[4762]: E0308 00:24:18.263237 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.263133 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:18 crc kubenswrapper[4762]: E0308 00:24:18.263385 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.302822 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.302939 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.302970 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.303004 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.303027 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:18Z","lastTransitionTime":"2026-03-08T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.406159 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.406263 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.406283 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.406346 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.406365 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:18Z","lastTransitionTime":"2026-03-08T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.509674 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.509801 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.509832 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.509868 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.509892 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:18Z","lastTransitionTime":"2026-03-08T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.613160 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.613240 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.613265 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.613294 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.613316 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:18Z","lastTransitionTime":"2026-03-08T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.717535 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.717593 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.717610 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.717634 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.717653 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:18Z","lastTransitionTime":"2026-03-08T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.820813 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.820885 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.820908 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.820932 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.820950 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:18Z","lastTransitionTime":"2026-03-08T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.923683 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.923733 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.923750 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.923806 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:18 crc kubenswrapper[4762]: I0308 00:24:18.923823 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:18Z","lastTransitionTime":"2026-03-08T00:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.027202 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.027283 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.027304 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.027331 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.027350 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:19Z","lastTransitionTime":"2026-03-08T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.130687 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.130820 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.130845 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.130874 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.130895 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:19Z","lastTransitionTime":"2026-03-08T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.234999 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.235069 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.235086 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.235112 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.235131 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:19Z","lastTransitionTime":"2026-03-08T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.262746 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:19 crc kubenswrapper[4762]: E0308 00:24:19.262974 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.283015 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.300602 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.315848 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.332036 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.338858 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.338945 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.338966 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 
00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.338993 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.339011 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:19Z","lastTransitionTime":"2026-03-08T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.346829 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.363497 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.375553 4762 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.441981 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.442041 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.442058 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.442083 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 
00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.442118 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:19Z","lastTransitionTime":"2026-03-08T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.545414 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.545481 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.545498 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.545525 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.545543 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:19Z","lastTransitionTime":"2026-03-08T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.648029 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.648108 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.648135 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.648166 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.648184 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:19Z","lastTransitionTime":"2026-03-08T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.751348 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.751427 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.751444 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.751472 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.751490 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:19Z","lastTransitionTime":"2026-03-08T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.854593 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.854670 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.854687 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.854712 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.854731 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:19Z","lastTransitionTime":"2026-03-08T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.888384 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:19 crc kubenswrapper[4762]: E0308 00:24:19.888720 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-08 00:24:23.888680025 +0000 UTC m=+85.362824409 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.958091 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.958155 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.958173 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.958200 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.958218 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:19Z","lastTransitionTime":"2026-03-08T00:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.989917 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.990011 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.990114 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:19 crc kubenswrapper[4762]: I0308 00:24:19.990174 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:19 crc kubenswrapper[4762]: E0308 00:24:19.990213 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Mar 08 00:24:19 crc kubenswrapper[4762]: E0308 00:24:19.990255 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 00:24:19 crc kubenswrapper[4762]: E0308 00:24:19.990280 4762 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:19 crc kubenswrapper[4762]: E0308 00:24:19.990295 4762 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 08 00:24:19 crc kubenswrapper[4762]: E0308 00:24:19.990352 4762 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 08 00:24:19 crc kubenswrapper[4762]: E0308 00:24:19.990388 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 00:24:19 crc kubenswrapper[4762]: E0308 00:24:19.990423 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 00:24:19 crc kubenswrapper[4762]: E0308 00:24:19.990452 4762 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:19 crc 
kubenswrapper[4762]: E0308 00:24:19.990421 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:23.990387526 +0000 UTC m=+85.464531900 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:19 crc kubenswrapper[4762]: E0308 00:24:19.990553 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:23.990521751 +0000 UTC m=+85.464666175 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 08 00:24:19 crc kubenswrapper[4762]: E0308 00:24:19.990586 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:23.990568412 +0000 UTC m=+85.464712896 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:19 crc kubenswrapper[4762]: E0308 00:24:19.990612 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:23.990599123 +0000 UTC m=+85.464743507 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.061724 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.061809 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.061828 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.061870 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.061897 4762 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:20Z","lastTransitionTime":"2026-03-08T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.165523 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.165577 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.165597 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.165624 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.165664 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:20Z","lastTransitionTime":"2026-03-08T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.263308 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.263344 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:20 crc kubenswrapper[4762]: E0308 00:24:20.263511 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:20 crc kubenswrapper[4762]: E0308 00:24:20.263700 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.269049 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.269111 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.269140 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.269171 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.269194 4762 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:20Z","lastTransitionTime":"2026-03-08T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.372420 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.372500 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.372523 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.372557 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.372581 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:20Z","lastTransitionTime":"2026-03-08T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.475700 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.475801 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.475820 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.475852 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.475871 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:20Z","lastTransitionTime":"2026-03-08T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.578426 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.578508 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.578526 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.578553 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.578571 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:20Z","lastTransitionTime":"2026-03-08T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.681436 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.681534 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.681560 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.681594 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.681623 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:20Z","lastTransitionTime":"2026-03-08T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.784538 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.784609 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.784628 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.784657 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.784676 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:20Z","lastTransitionTime":"2026-03-08T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.888083 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.888136 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.888155 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.888179 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.888197 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:20Z","lastTransitionTime":"2026-03-08T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.991351 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.991407 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.991424 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.991451 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:20 crc kubenswrapper[4762]: I0308 00:24:20.991467 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:20Z","lastTransitionTime":"2026-03-08T00:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.094178 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.094257 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.094275 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.094301 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.094321 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:21Z","lastTransitionTime":"2026-03-08T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.197401 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.197472 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.197490 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.197514 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.197531 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:21Z","lastTransitionTime":"2026-03-08T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.262997 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:21 crc kubenswrapper[4762]: E0308 00:24:21.263195 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.300400 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.300740 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.300916 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.301058 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.301190 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:21Z","lastTransitionTime":"2026-03-08T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.405722 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.406093 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.406283 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.406437 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.406580 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:21Z","lastTransitionTime":"2026-03-08T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.510740 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.510845 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.510864 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.510891 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.510908 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:21Z","lastTransitionTime":"2026-03-08T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.614580 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.614656 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.614682 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.614709 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.614728 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:21Z","lastTransitionTime":"2026-03-08T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.718659 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.718718 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.718735 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.718796 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.718815 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:21Z","lastTransitionTime":"2026-03-08T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.823011 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.823079 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.823104 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.823131 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.823148 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:21Z","lastTransitionTime":"2026-03-08T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.926394 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.926988 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.927230 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.927477 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:21 crc kubenswrapper[4762]: I0308 00:24:21.927685 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:21Z","lastTransitionTime":"2026-03-08T00:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.031376 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.031467 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.031486 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.031517 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.031536 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:22Z","lastTransitionTime":"2026-03-08T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.135282 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.135360 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.135380 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.135409 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.135430 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:22Z","lastTransitionTime":"2026-03-08T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.238535 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.238627 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.238651 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.238686 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.238708 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:22Z","lastTransitionTime":"2026-03-08T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.263251 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.263311 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:22 crc kubenswrapper[4762]: E0308 00:24:22.263467 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:22 crc kubenswrapper[4762]: E0308 00:24:22.263645 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.342375 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.342440 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.342458 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.342488 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.342511 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:22Z","lastTransitionTime":"2026-03-08T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.445992 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.446068 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.446093 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.446167 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.446193 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:22Z","lastTransitionTime":"2026-03-08T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.548954 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.549013 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.549028 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.549048 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.549061 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:22Z","lastTransitionTime":"2026-03-08T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.652190 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.652290 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.652313 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.652344 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.652366 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:22Z","lastTransitionTime":"2026-03-08T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.755814 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.755898 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.755916 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.755941 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.755959 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:22Z","lastTransitionTime":"2026-03-08T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.859253 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.859340 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.859360 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.859384 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.859403 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:22Z","lastTransitionTime":"2026-03-08T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.962350 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.962425 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.962449 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.962482 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:22 crc kubenswrapper[4762]: I0308 00:24:22.962506 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:22Z","lastTransitionTime":"2026-03-08T00:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.066278 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.066336 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.066353 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.066374 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.066389 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:23Z","lastTransitionTime":"2026-03-08T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.169629 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.169695 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.169708 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.169728 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.169742 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:23Z","lastTransitionTime":"2026-03-08T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.263050 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:23 crc kubenswrapper[4762]: E0308 00:24:23.263326 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.273194 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.273303 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.273325 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.273353 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.273375 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:23Z","lastTransitionTime":"2026-03-08T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.377253 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.377318 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.377335 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.377360 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.377379 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:23Z","lastTransitionTime":"2026-03-08T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.481593 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.481664 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.481684 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.481713 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.481735 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:23Z","lastTransitionTime":"2026-03-08T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.585692 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.585807 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.585828 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.585853 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.585874 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:23Z","lastTransitionTime":"2026-03-08T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.689010 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.689084 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.689105 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.689142 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.689160 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:23Z","lastTransitionTime":"2026-03-08T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.792889 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.792949 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.792967 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.792993 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.793010 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:23Z","lastTransitionTime":"2026-03-08T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.896585 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.896679 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.896698 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.896724 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.896745 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:23Z","lastTransitionTime":"2026-03-08T00:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:23 crc kubenswrapper[4762]: I0308 00:24:23.930648 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:23 crc kubenswrapper[4762]: E0308 00:24:23.930921 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-08 00:24:31.930877366 +0000 UTC m=+93.405021740 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:23.999989 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.000044 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.000062 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.000089 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.000109 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:24Z","lastTransitionTime":"2026-03-08T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.031816 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.031872 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.031916 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.031961 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:24 crc kubenswrapper[4762]: E0308 00:24:24.032047 4762 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Mar 08 00:24:24 crc kubenswrapper[4762]: E0308 00:24:24.032108 4762 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 08 00:24:24 crc kubenswrapper[4762]: E0308 00:24:24.032151 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:32.032123004 +0000 UTC m=+93.506267388 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 08 00:24:24 crc kubenswrapper[4762]: E0308 00:24:24.032186 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:32.032163695 +0000 UTC m=+93.506308069 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 08 00:24:24 crc kubenswrapper[4762]: E0308 00:24:24.032221 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 00:24:24 crc kubenswrapper[4762]: E0308 00:24:24.032258 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 00:24:24 crc kubenswrapper[4762]: E0308 00:24:24.032260 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 00:24:24 crc kubenswrapper[4762]: E0308 00:24:24.032324 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 00:24:24 crc kubenswrapper[4762]: E0308 00:24:24.032279 4762 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:24 crc kubenswrapper[4762]: E0308 00:24:24.032347 4762 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:24 crc kubenswrapper[4762]: E0308 00:24:24.032405 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:32.032384652 +0000 UTC m=+93.506529026 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:24 crc kubenswrapper[4762]: E0308 00:24:24.032439 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:32.032421334 +0000 UTC m=+93.506565718 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.103597 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.103656 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.103674 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.103697 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.103714 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:24Z","lastTransitionTime":"2026-03-08T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.207393 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.207442 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.207455 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.207474 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.207487 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:24Z","lastTransitionTime":"2026-03-08T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.263333 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.263354 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:24 crc kubenswrapper[4762]: E0308 00:24:24.264211 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:24 crc kubenswrapper[4762]: E0308 00:24:24.264395 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.312185 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.312244 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.312269 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.312297 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.312314 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:24Z","lastTransitionTime":"2026-03-08T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.415471 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.415573 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.415594 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.415626 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.415653 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:24Z","lastTransitionTime":"2026-03-08T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.518743 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.518835 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.518849 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.518880 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.518900 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:24Z","lastTransitionTime":"2026-03-08T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.622723 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.623139 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.623307 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.623466 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.623613 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:24Z","lastTransitionTime":"2026-03-08T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.728089 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.728163 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.728177 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.728203 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.728221 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:24Z","lastTransitionTime":"2026-03-08T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.831559 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.831629 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.831650 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.831680 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.831697 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:24Z","lastTransitionTime":"2026-03-08T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.934493 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.934553 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.934568 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.934592 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:24 crc kubenswrapper[4762]: I0308 00:24:24.934606 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:24Z","lastTransitionTime":"2026-03-08T00:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.037845 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.037907 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.037925 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.037956 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.037976 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:25Z","lastTransitionTime":"2026-03-08T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.140552 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.140624 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.140642 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.140669 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.140687 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:25Z","lastTransitionTime":"2026-03-08T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.244539 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.244610 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.244634 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.244685 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.244707 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:25Z","lastTransitionTime":"2026-03-08T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.262564 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:25 crc kubenswrapper[4762]: E0308 00:24:25.262830 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.347984 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.348054 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.348071 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.348099 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.348117 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:25Z","lastTransitionTime":"2026-03-08T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.451864 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.451924 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.451944 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.451970 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.451988 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:25Z","lastTransitionTime":"2026-03-08T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.561997 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.562088 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.562126 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.562163 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.562247 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:25Z","lastTransitionTime":"2026-03-08T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.620805 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.620846 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.620854 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.620868 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.620878 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:25Z","lastTransitionTime":"2026-03-08T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:25 crc kubenswrapper[4762]: E0308 00:24:25.637669 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.643621 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.644027 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.644173 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.644320 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.644455 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:25Z","lastTransitionTime":"2026-03-08T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:25 crc kubenswrapper[4762]: E0308 00:24:25.660706 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.666448 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.666524 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.666548 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.666591 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.666618 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:25Z","lastTransitionTime":"2026-03-08T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:25 crc kubenswrapper[4762]: E0308 00:24:25.684120 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.689983 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.690060 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.690084 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.690119 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.690139 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:25Z","lastTransitionTime":"2026-03-08T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:25 crc kubenswrapper[4762]: E0308 00:24:25.707731 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.714535 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.714599 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.714620 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.714646 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.714664 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:25Z","lastTransitionTime":"2026-03-08T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:25 crc kubenswrapper[4762]: E0308 00:24:25.732290 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:25 crc kubenswrapper[4762]: E0308 00:24:25.732538 4762 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.735989 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.736081 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.736101 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.736155 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.736177 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:25Z","lastTransitionTime":"2026-03-08T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.839131 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.839210 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.839234 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.839268 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.839293 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:25Z","lastTransitionTime":"2026-03-08T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.943829 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.943894 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.943912 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.943939 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:25 crc kubenswrapper[4762]: I0308 00:24:25.943958 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:25Z","lastTransitionTime":"2026-03-08T00:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.046396 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.046456 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.046473 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.046519 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.046537 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:26Z","lastTransitionTime":"2026-03-08T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.149083 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.149131 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.149148 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.149174 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.149190 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:26Z","lastTransitionTime":"2026-03-08T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.253156 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.253556 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.253722 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.253925 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.254067 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:26Z","lastTransitionTime":"2026-03-08T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.262617 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:26 crc kubenswrapper[4762]: E0308 00:24:26.262741 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.262629 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:26 crc kubenswrapper[4762]: E0308 00:24:26.263201 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.356884 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.356961 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.356977 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.357002 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.357020 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:26Z","lastTransitionTime":"2026-03-08T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.460241 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.460646 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.460843 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.461006 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.461145 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:26Z","lastTransitionTime":"2026-03-08T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.563999 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.564063 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.564080 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.564106 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.564125 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:26Z","lastTransitionTime":"2026-03-08T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.666438 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.666516 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.666538 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.666565 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:26 crc kubenswrapper[4762]: I0308 00:24:26.666586 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:26Z","lastTransitionTime":"2026-03-08T00:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.209990 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.210918 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.211075 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.211206 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.211327 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:27Z","lastTransitionTime":"2026-03-08T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.263080 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:27 crc kubenswrapper[4762]: E0308 00:24:27.263283 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.314307 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.314361 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.314372 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.314389 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.314400 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:27Z","lastTransitionTime":"2026-03-08T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.417010 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.417053 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.417066 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.417082 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.417093 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:27Z","lastTransitionTime":"2026-03-08T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.519425 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.519461 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.519472 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.519488 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.519498 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:27Z","lastTransitionTime":"2026-03-08T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.622600 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.622630 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.622639 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.622652 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.622662 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:27Z","lastTransitionTime":"2026-03-08T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.724477 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.724506 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.724516 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.724529 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.724538 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:27Z","lastTransitionTime":"2026-03-08T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.826919 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.826949 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.826957 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.826970 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.826978 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:27Z","lastTransitionTime":"2026-03-08T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.929974 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.930037 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.930055 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.930081 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:27 crc kubenswrapper[4762]: I0308 00:24:27.930098 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:27Z","lastTransitionTime":"2026-03-08T00:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.033591 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.033646 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.033659 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.033679 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.033692 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:28Z","lastTransitionTime":"2026-03-08T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.137254 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.137304 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.137313 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.137334 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.137346 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:28Z","lastTransitionTime":"2026-03-08T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.241082 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.241170 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.241196 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.241230 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.241278 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:28Z","lastTransitionTime":"2026-03-08T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.263215 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:28 crc kubenswrapper[4762]: E0308 00:24:28.263334 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.263216 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:28 crc kubenswrapper[4762]: E0308 00:24:28.263746 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:28 crc kubenswrapper[4762]: E0308 00:24:28.271583 4762 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 08 00:24:28 crc kubenswrapper[4762]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 08 00:24:28 crc kubenswrapper[4762]: set -o allexport Mar 08 00:24:28 crc kubenswrapper[4762]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 08 00:24:28 crc kubenswrapper[4762]: source /etc/kubernetes/apiserver-url.env Mar 08 00:24:28 crc kubenswrapper[4762]: else Mar 08 00:24:28 crc kubenswrapper[4762]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 08 00:24:28 crc kubenswrapper[4762]: exit 1 Mar 08 00:24:28 crc kubenswrapper[4762]: fi Mar 08 00:24:28 crc kubenswrapper[4762]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 08 00:24:28 crc kubenswrapper[4762]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 08 00:24:28 crc kubenswrapper[4762]: > logger="UnhandledError" Mar 08 00:24:28 crc kubenswrapper[4762]: E0308 00:24:28.273143 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.287464 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 
00:24:28.287751 4762 scope.go:117] "RemoveContainer" containerID="74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784" Mar 08 00:24:28 crc kubenswrapper[4762]: E0308 00:24:28.288937 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.344499 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.344594 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.344625 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.344655 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.344678 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:28Z","lastTransitionTime":"2026-03-08T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.448521 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.448585 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.448604 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.448632 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.448652 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:28Z","lastTransitionTime":"2026-03-08T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.552478 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.552552 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.552574 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.552602 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.552621 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:28Z","lastTransitionTime":"2026-03-08T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.654321 4762 scope.go:117] "RemoveContainer" containerID="74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784" Mar 08 00:24:28 crc kubenswrapper[4762]: E0308 00:24:28.654608 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.655390 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.655456 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.655481 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.655509 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.655529 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:28Z","lastTransitionTime":"2026-03-08T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.758944 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.759033 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.759051 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.759081 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.759103 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:28Z","lastTransitionTime":"2026-03-08T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.862791 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.862872 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.862891 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.862922 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.862946 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:28Z","lastTransitionTime":"2026-03-08T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.966567 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.966625 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.966635 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.966656 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:28 crc kubenswrapper[4762]: I0308 00:24:28.966670 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:28Z","lastTransitionTime":"2026-03-08T00:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.069999 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.070062 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.070081 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.070104 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.070119 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:29Z","lastTransitionTime":"2026-03-08T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.173554 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.173636 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.173657 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.173689 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.173711 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:29Z","lastTransitionTime":"2026-03-08T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.263285 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:29 crc kubenswrapper[4762]: E0308 00:24:29.263476 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.276574 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.276742 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.276865 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.276958 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.277057 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:29Z","lastTransitionTime":"2026-03-08T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.283143 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.304280 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11fe626-2001-4b03-8751-2498c02e9969\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-08T00:24:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0308 00:24:06.836532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0308 00:24:06.836678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0308 00:24:06.837491 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2787369083/tls.crt::/tmp/serving-cert-2787369083/tls.key\\\\\\\"\\\\nI0308 00:24:07.111979 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0308 00:24:07.116315 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0308 00:24:07.116334 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0308 00:24:07.116359 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0308 00:24:07.116366 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0308 00:24:07.122520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0308 00:24:07.122542 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0308 00:24:07.122566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122587 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0308 00:24:07.122593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0308 00:24:07.122599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0308 00:24:07.122605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0308 00:24:07.124179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.321799 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.334998 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.345686 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.355512 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.372619 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.379552 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.379607 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.379626 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.379656 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.379679 4762 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:29Z","lastTransitionTime":"2026-03-08T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.482585 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.482646 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.482664 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.482688 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.482705 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:29Z","lastTransitionTime":"2026-03-08T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.585290 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.585331 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.585343 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.585360 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.585373 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:29Z","lastTransitionTime":"2026-03-08T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.688111 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.688148 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.688158 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.688171 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.688180 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:29Z","lastTransitionTime":"2026-03-08T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.790830 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.790894 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.790914 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.790951 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.790969 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:29Z","lastTransitionTime":"2026-03-08T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.894973 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.895061 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.895090 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.895122 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.895144 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:29Z","lastTransitionTime":"2026-03-08T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.998247 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.998311 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.998323 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.998349 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:29 crc kubenswrapper[4762]: I0308 00:24:29.998363 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:29Z","lastTransitionTime":"2026-03-08T00:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.110047 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.110134 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.110154 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.110186 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.110210 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:30Z","lastTransitionTime":"2026-03-08T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.214136 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.214192 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.214206 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.214227 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.214240 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:30Z","lastTransitionTime":"2026-03-08T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.262627 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.262677 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:30 crc kubenswrapper[4762]: E0308 00:24:30.263063 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:30 crc kubenswrapper[4762]: E0308 00:24:30.263201 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:30 crc kubenswrapper[4762]: E0308 00:24:30.265123 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 08 00:24:30 crc kubenswrapper[4762]: E0308 00:24:30.266477 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.317016 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.317094 
4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.317120 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.317151 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.317172 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:30Z","lastTransitionTime":"2026-03-08T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.420837 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.420909 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.420927 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.420954 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.420977 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:30Z","lastTransitionTime":"2026-03-08T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.524379 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.524447 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.524464 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.524489 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.524506 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:30Z","lastTransitionTime":"2026-03-08T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.628902 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.628973 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.628992 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.629018 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.629038 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:30Z","lastTransitionTime":"2026-03-08T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.734948 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.735023 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.735041 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.735518 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.735576 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:30Z","lastTransitionTime":"2026-03-08T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.838931 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.839469 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.839609 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.839744 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.839942 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:30Z","lastTransitionTime":"2026-03-08T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.944093 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.944241 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.944266 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.944337 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:30 crc kubenswrapper[4762]: I0308 00:24:30.944368 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:30Z","lastTransitionTime":"2026-03-08T00:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.046808 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.047151 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.047218 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.047282 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.047354 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:31Z","lastTransitionTime":"2026-03-08T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.150155 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.150247 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.150275 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.150314 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.150340 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:31Z","lastTransitionTime":"2026-03-08T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.253120 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.253217 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.253237 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.253260 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.253280 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:31Z","lastTransitionTime":"2026-03-08T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.262810 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:31 crc kubenswrapper[4762]: E0308 00:24:31.262960 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:31 crc kubenswrapper[4762]: E0308 00:24:31.265652 4762 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 08 00:24:31 crc kubenswrapper[4762]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 08 00:24:31 crc kubenswrapper[4762]: if [[ -f "/env/_master" ]]; then Mar 08 00:24:31 crc kubenswrapper[4762]: set -o allexport Mar 08 00:24:31 crc kubenswrapper[4762]: source "/env/_master" Mar 08 00:24:31 crc kubenswrapper[4762]: set +o allexport Mar 08 00:24:31 crc kubenswrapper[4762]: fi Mar 08 00:24:31 crc kubenswrapper[4762]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 08 00:24:31 crc kubenswrapper[4762]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 08 00:24:31 crc kubenswrapper[4762]: ho_enable="--enable-hybrid-overlay" Mar 08 00:24:31 crc kubenswrapper[4762]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 08 00:24:31 crc kubenswrapper[4762]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 08 00:24:31 crc kubenswrapper[4762]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 08 00:24:31 crc kubenswrapper[4762]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 08 00:24:31 crc kubenswrapper[4762]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 08 00:24:31 crc kubenswrapper[4762]: --webhook-host=127.0.0.1 \ Mar 08 00:24:31 crc kubenswrapper[4762]: --webhook-port=9743 \ Mar 08 00:24:31 crc kubenswrapper[4762]: ${ho_enable} \ Mar 08 00:24:31 crc kubenswrapper[4762]: --enable-interconnect \ Mar 08 00:24:31 crc 
kubenswrapper[4762]: --disable-approver \ Mar 08 00:24:31 crc kubenswrapper[4762]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 08 00:24:31 crc kubenswrapper[4762]: --wait-for-kubernetes-api=200s \ Mar 08 00:24:31 crc kubenswrapper[4762]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 08 00:24:31 crc kubenswrapper[4762]: --loglevel="${LOGLEVEL}" Mar 08 00:24:31 crc kubenswrapper[4762]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions
:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 08 00:24:31 crc kubenswrapper[4762]: > logger="UnhandledError" Mar 08 00:24:31 crc kubenswrapper[4762]: E0308 00:24:31.268231 4762 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 08 00:24:31 crc kubenswrapper[4762]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 08 00:24:31 crc kubenswrapper[4762]: if [[ -f "/env/_master" ]]; then Mar 08 00:24:31 crc kubenswrapper[4762]: set -o allexport Mar 08 00:24:31 crc kubenswrapper[4762]: source "/env/_master" Mar 08 00:24:31 crc kubenswrapper[4762]: set +o allexport Mar 08 00:24:31 crc kubenswrapper[4762]: fi Mar 08 00:24:31 crc kubenswrapper[4762]: Mar 08 00:24:31 crc kubenswrapper[4762]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 08 00:24:31 crc kubenswrapper[4762]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 08 00:24:31 crc kubenswrapper[4762]: --disable-webhook \ Mar 08 00:24:31 crc kubenswrapper[4762]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 08 00:24:31 crc kubenswrapper[4762]: --loglevel="${LOGLEVEL}" Mar 08 00:24:31 crc kubenswrapper[4762]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 08 00:24:31 crc kubenswrapper[4762]: > logger="UnhandledError" Mar 08 00:24:31 crc kubenswrapper[4762]: E0308 00:24:31.269658 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.357006 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.357125 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.357180 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.357207 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.357228 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:31Z","lastTransitionTime":"2026-03-08T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.460341 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.460414 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.460434 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.460456 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.460473 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:31Z","lastTransitionTime":"2026-03-08T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.563243 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.563296 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.563308 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.563328 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.563340 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:31Z","lastTransitionTime":"2026-03-08T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.666059 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.666122 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.666139 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.666166 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.666185 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:31Z","lastTransitionTime":"2026-03-08T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.768792 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.768856 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.768875 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.768899 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.768916 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:31Z","lastTransitionTime":"2026-03-08T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.872320 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.872399 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.872418 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.872443 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.872461 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:31Z","lastTransitionTime":"2026-03-08T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.954129 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:31 crc kubenswrapper[4762]: E0308 00:24:31.954484 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-08 00:24:47.954436609 +0000 UTC m=+109.428580963 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.975298 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.975350 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.975367 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.975390 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:31 crc kubenswrapper[4762]: I0308 00:24:31.975406 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:31Z","lastTransitionTime":"2026-03-08T00:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.055506 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.055624 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:32 crc kubenswrapper[4762]: E0308 00:24:32.055670 4762 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.055685 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.055741 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:32 crc kubenswrapper[4762]: E0308 00:24:32.055818 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:48.055751629 +0000 UTC m=+109.529896013 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 08 00:24:32 crc kubenswrapper[4762]: E0308 00:24:32.055947 4762 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 08 00:24:32 crc kubenswrapper[4762]: E0308 00:24:32.056023 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:48.056000037 +0000 UTC m=+109.530144421 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 08 00:24:32 crc kubenswrapper[4762]: E0308 00:24:32.056153 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 00:24:32 crc kubenswrapper[4762]: E0308 00:24:32.056184 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 00:24:32 crc kubenswrapper[4762]: E0308 00:24:32.056208 4762 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:32 crc kubenswrapper[4762]: E0308 00:24:32.056266 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:48.056247504 +0000 UTC m=+109.530391878 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:32 crc kubenswrapper[4762]: E0308 00:24:32.056569 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 00:24:32 crc kubenswrapper[4762]: E0308 00:24:32.056598 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 00:24:32 crc kubenswrapper[4762]: E0308 00:24:32.056615 4762 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:32 crc kubenswrapper[4762]: E0308 00:24:32.056680 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-08 00:24:48.056664647 +0000 UTC m=+109.530809031 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.078605 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.078673 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.078699 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.078729 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.078750 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:32Z","lastTransitionTime":"2026-03-08T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.181303 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.181344 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.181353 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.181367 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.181376 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:32Z","lastTransitionTime":"2026-03-08T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.262332 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.262332 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:32 crc kubenswrapper[4762]: E0308 00:24:32.262542 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:32 crc kubenswrapper[4762]: E0308 00:24:32.262659 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.284555 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.284632 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.284656 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.284687 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.284709 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:32Z","lastTransitionTime":"2026-03-08T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.388003 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.388058 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.388071 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.388089 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.388101 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:32Z","lastTransitionTime":"2026-03-08T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.491030 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.491101 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.491119 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.491151 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.491197 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:32Z","lastTransitionTime":"2026-03-08T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.595018 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.595079 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.595098 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.595123 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.595147 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:32Z","lastTransitionTime":"2026-03-08T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.698717 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.698806 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.698824 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.698847 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.698864 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:32Z","lastTransitionTime":"2026-03-08T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.801919 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.801988 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.802007 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.802038 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.802065 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:32Z","lastTransitionTime":"2026-03-08T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.904976 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.905042 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.905061 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.905085 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:32 crc kubenswrapper[4762]: I0308 00:24:32.905105 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:32Z","lastTransitionTime":"2026-03-08T00:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.007936 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.007994 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.008010 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.008037 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.008056 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:33Z","lastTransitionTime":"2026-03-08T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.111605 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.111656 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.111669 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.111688 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.111701 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:33Z","lastTransitionTime":"2026-03-08T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.214875 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.214953 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.214974 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.215011 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.215034 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:33Z","lastTransitionTime":"2026-03-08T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.263019 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:33 crc kubenswrapper[4762]: E0308 00:24:33.263218 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.318314 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.318378 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.318396 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.318433 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.318458 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:33Z","lastTransitionTime":"2026-03-08T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.421356 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.421835 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.422009 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.422155 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.422291 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:33Z","lastTransitionTime":"2026-03-08T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.524929 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.524994 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.525012 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.525046 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.525065 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:33Z","lastTransitionTime":"2026-03-08T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.628844 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.628907 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.628921 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.628942 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.628955 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:33Z","lastTransitionTime":"2026-03-08T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.732432 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.732572 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.732591 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.732617 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.732638 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:33Z","lastTransitionTime":"2026-03-08T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.836613 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.836712 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.836730 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.836781 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.836800 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:33Z","lastTransitionTime":"2026-03-08T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.940398 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.940472 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.940491 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.940519 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:33 crc kubenswrapper[4762]: I0308 00:24:33.940537 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:33Z","lastTransitionTime":"2026-03-08T00:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.043740 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.043856 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.043880 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.043914 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.043937 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:34Z","lastTransitionTime":"2026-03-08T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.147805 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.147870 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.147890 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.147921 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.147944 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:34Z","lastTransitionTime":"2026-03-08T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.251455 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.251515 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.251532 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.251559 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.251576 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:34Z","lastTransitionTime":"2026-03-08T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.263071 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.263071 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:34 crc kubenswrapper[4762]: E0308 00:24:34.263305 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:34 crc kubenswrapper[4762]: E0308 00:24:34.263462 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.354529 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.355494 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.355662 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.355849 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.356000 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:34Z","lastTransitionTime":"2026-03-08T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.460115 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.460223 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.460248 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.460281 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.460303 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:34Z","lastTransitionTime":"2026-03-08T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.563749 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.563887 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.563913 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.563946 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.563970 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:34Z","lastTransitionTime":"2026-03-08T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.666841 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.666938 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.666957 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.666985 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.667008 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:34Z","lastTransitionTime":"2026-03-08T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.770356 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.770453 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.770471 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.770533 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.770554 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:34Z","lastTransitionTime":"2026-03-08T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.873502 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.873568 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.873586 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.873612 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.873633 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:34Z","lastTransitionTime":"2026-03-08T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.976632 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.976695 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.976712 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.976738 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:34 crc kubenswrapper[4762]: I0308 00:24:34.976787 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:34Z","lastTransitionTime":"2026-03-08T00:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.080059 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.080126 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.080144 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.080169 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.080186 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:35Z","lastTransitionTime":"2026-03-08T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.183378 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.183436 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.183450 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.183473 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.183485 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:35Z","lastTransitionTime":"2026-03-08T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.262621 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:35 crc kubenswrapper[4762]: E0308 00:24:35.262887 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.286722 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.286816 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.286834 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.286866 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.286884 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:35Z","lastTransitionTime":"2026-03-08T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.390345 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.390413 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.390430 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.390452 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.390464 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:35Z","lastTransitionTime":"2026-03-08T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.493134 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.493185 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.493195 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.493217 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.493233 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:35Z","lastTransitionTime":"2026-03-08T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.595645 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.595743 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.595781 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.595802 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.595814 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:35Z","lastTransitionTime":"2026-03-08T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.698804 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.698872 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.698890 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.698916 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.698934 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:35Z","lastTransitionTime":"2026-03-08T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.788406 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.788477 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.788502 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.788531 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.788554 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:35Z","lastTransitionTime":"2026-03-08T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:35 crc kubenswrapper[4762]: E0308 00:24:35.804750 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.809860 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.809916 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.809934 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.809961 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.809978 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:35Z","lastTransitionTime":"2026-03-08T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:35 crc kubenswrapper[4762]: E0308 00:24:35.820904 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.824928 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.824993 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.825005 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.825026 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.825038 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:35Z","lastTransitionTime":"2026-03-08T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:35 crc kubenswrapper[4762]: E0308 00:24:35.839350 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.842786 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.842823 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.842833 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.842849 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.842859 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:35Z","lastTransitionTime":"2026-03-08T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:35 crc kubenswrapper[4762]: E0308 00:24:35.852446 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.857973 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.858030 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.858047 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.858072 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.858090 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:35Z","lastTransitionTime":"2026-03-08T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:35 crc kubenswrapper[4762]: E0308 00:24:35.873524 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:35 crc kubenswrapper[4762]: E0308 00:24:35.873814 4762 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.875627 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.875681 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.875694 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.875715 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.875730 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:35Z","lastTransitionTime":"2026-03-08T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.915041 4762 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.977915 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.978198 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.978262 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.978331 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:35 crc kubenswrapper[4762]: I0308 00:24:35.978400 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:35Z","lastTransitionTime":"2026-03-08T00:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.080523 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.080577 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.080586 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.080599 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.080607 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:36Z","lastTransitionTime":"2026-03-08T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.184080 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.184149 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.184170 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.184197 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.184217 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:36Z","lastTransitionTime":"2026-03-08T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.263265 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.263281 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:36 crc kubenswrapper[4762]: E0308 00:24:36.263944 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:36 crc kubenswrapper[4762]: E0308 00:24:36.264121 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.287460 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.287505 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.287521 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.287543 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.287559 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:36Z","lastTransitionTime":"2026-03-08T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.391178 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.392115 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.392261 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.392405 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.392541 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:36Z","lastTransitionTime":"2026-03-08T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.495745 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.495835 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.495861 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.495891 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.495913 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:36Z","lastTransitionTime":"2026-03-08T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.600434 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.600509 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.600533 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.600566 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.600593 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:36Z","lastTransitionTime":"2026-03-08T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.704249 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.704298 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.704317 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.704342 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.704359 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:36Z","lastTransitionTime":"2026-03-08T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.807816 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.807864 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.807946 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.807973 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.808009 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:36Z","lastTransitionTime":"2026-03-08T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.911030 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.911100 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.911117 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.911141 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:36 crc kubenswrapper[4762]: I0308 00:24:36.911155 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:36Z","lastTransitionTime":"2026-03-08T00:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.014733 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.014823 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.014845 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.014872 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.014894 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:37Z","lastTransitionTime":"2026-03-08T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.117842 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.117893 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.117910 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.117934 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.117950 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:37Z","lastTransitionTime":"2026-03-08T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.221870 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.221956 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.221981 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.222013 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.222035 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:37Z","lastTransitionTime":"2026-03-08T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.262974 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:37 crc kubenswrapper[4762]: E0308 00:24:37.263453 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.325291 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.325352 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.325370 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.325395 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.325416 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:37Z","lastTransitionTime":"2026-03-08T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.429465 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.429560 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.429590 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.429624 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.429648 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:37Z","lastTransitionTime":"2026-03-08T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.533434 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.533503 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.533523 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.533553 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.533576 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:37Z","lastTransitionTime":"2026-03-08T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.637009 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.637064 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.637083 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.637107 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.637123 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:37Z","lastTransitionTime":"2026-03-08T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.739539 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.739604 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.739622 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.739647 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.739665 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:37Z","lastTransitionTime":"2026-03-08T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.842884 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.843237 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.843322 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.843423 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.843529 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:37Z","lastTransitionTime":"2026-03-08T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.946238 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.946329 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.946347 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.946370 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:37 crc kubenswrapper[4762]: I0308 00:24:37.946389 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:37Z","lastTransitionTime":"2026-03-08T00:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.049047 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.049090 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.049106 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.049127 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.049143 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:38Z","lastTransitionTime":"2026-03-08T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.152169 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.152222 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.152245 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.152269 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.152285 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:38Z","lastTransitionTime":"2026-03-08T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.255255 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.255313 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.255324 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.255345 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.255357 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:38Z","lastTransitionTime":"2026-03-08T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.262704 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.262724 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:38 crc kubenswrapper[4762]: E0308 00:24:38.262837 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:38 crc kubenswrapper[4762]: E0308 00:24:38.262978 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.358849 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.358925 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.358950 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.358979 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.359002 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:38Z","lastTransitionTime":"2026-03-08T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.461873 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.461933 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.461955 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.461981 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.461998 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:38Z","lastTransitionTime":"2026-03-08T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.565465 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.565562 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.565586 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.565612 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.565968 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:38Z","lastTransitionTime":"2026-03-08T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.668735 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.668856 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.668879 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.668903 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.668921 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:38Z","lastTransitionTime":"2026-03-08T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.772163 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.772243 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.772262 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.772287 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.772306 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:38Z","lastTransitionTime":"2026-03-08T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.875937 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.876021 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.876044 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.876078 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.876101 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:38Z","lastTransitionTime":"2026-03-08T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.979451 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.979496 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.979507 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.979527 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:38 crc kubenswrapper[4762]: I0308 00:24:38.979538 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:38Z","lastTransitionTime":"2026-03-08T00:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.082642 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.082710 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.082729 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.082787 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.082813 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:39Z","lastTransitionTime":"2026-03-08T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.185823 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.185891 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.185911 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.185938 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.185957 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:39Z","lastTransitionTime":"2026-03-08T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.262601 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:39 crc kubenswrapper[4762]: E0308 00:24:39.262890 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.264008 4762 scope.go:117] "RemoveContainer" containerID="74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784" Mar 08 00:24:39 crc kubenswrapper[4762]: E0308 00:24:39.264328 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.281712 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.284428 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.288890 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.288988 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.289009 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.289035 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:39 crc 
kubenswrapper[4762]: I0308 00:24:39.289085 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:39Z","lastTransitionTime":"2026-03-08T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.298101 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.312848 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.332975 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11fe626-2001-4b03-8751-2498c02e9969\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-08T00:24:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0308 00:24:06.836532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0308 00:24:06.836678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0308 00:24:06.837491 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2787369083/tls.crt::/tmp/serving-cert-2787369083/tls.key\\\\\\\"\\\\nI0308 00:24:07.111979 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0308 00:24:07.116315 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0308 00:24:07.116334 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0308 00:24:07.116359 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0308 00:24:07.116366 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0308 00:24:07.122520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0308 00:24:07.122542 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0308 00:24:07.122566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122587 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0308 
00:24:07.122593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0308 00:24:07.122599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0308 00:24:07.122605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0308 00:24:07.124179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.350505 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.369546 4762 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.385622 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.392162 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.392221 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.392241 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.392269 4762 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.392288 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:39Z","lastTransitionTime":"2026-03-08T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.495306 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.495374 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.495430 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.495465 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.495490 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:39Z","lastTransitionTime":"2026-03-08T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.598722 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.598883 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.598905 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.598929 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.598946 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:39Z","lastTransitionTime":"2026-03-08T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.702407 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.702533 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.702553 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.702612 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.702631 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:39Z","lastTransitionTime":"2026-03-08T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.806408 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.806465 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.806485 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.806510 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.806527 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:39Z","lastTransitionTime":"2026-03-08T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.911331 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.911428 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.911447 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.911472 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:39 crc kubenswrapper[4762]: I0308 00:24:39.911490 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:39Z","lastTransitionTime":"2026-03-08T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.016146 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.016195 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.016208 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.016228 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.016240 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:40Z","lastTransitionTime":"2026-03-08T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.119794 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.119886 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.119906 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.119932 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.119980 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:40Z","lastTransitionTime":"2026-03-08T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.222893 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.222995 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.223449 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.223851 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.224209 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:40Z","lastTransitionTime":"2026-03-08T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.263223 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.263235 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:40 crc kubenswrapper[4762]: E0308 00:24:40.263418 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:40 crc kubenswrapper[4762]: E0308 00:24:40.263667 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.329074 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.329161 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.329184 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.329210 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.329257 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:40Z","lastTransitionTime":"2026-03-08T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.432670 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.432831 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.432853 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.432931 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.433863 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:40Z","lastTransitionTime":"2026-03-08T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.537339 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.537382 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.537396 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.537417 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.537433 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:40Z","lastTransitionTime":"2026-03-08T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.641074 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.641188 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.641212 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.641244 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.641285 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:40Z","lastTransitionTime":"2026-03-08T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.744312 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.744347 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.744356 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.744370 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.744384 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:40Z","lastTransitionTime":"2026-03-08T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.847263 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.847326 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.847345 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.847415 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.847438 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:40Z","lastTransitionTime":"2026-03-08T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.950871 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.950939 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.950952 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.950978 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:40 crc kubenswrapper[4762]: I0308 00:24:40.950995 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:40Z","lastTransitionTime":"2026-03-08T00:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.054411 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.054483 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.054501 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.054532 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.054552 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:41Z","lastTransitionTime":"2026-03-08T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.157443 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.157509 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.157521 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.157546 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.157568 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:41Z","lastTransitionTime":"2026-03-08T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.260484 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.260546 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.260559 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.260584 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.260604 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:41Z","lastTransitionTime":"2026-03-08T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.263014 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:41 crc kubenswrapper[4762]: E0308 00:24:41.263447 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.363836 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.363891 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.363909 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.363934 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.363949 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:41Z","lastTransitionTime":"2026-03-08T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.467844 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.467901 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.467914 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.467935 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.467948 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:41Z","lastTransitionTime":"2026-03-08T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.571384 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.571448 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.571473 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.571499 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.571522 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:41Z","lastTransitionTime":"2026-03-08T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.675570 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.675635 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.675655 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.675682 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.675707 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:41Z","lastTransitionTime":"2026-03-08T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.779192 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.779258 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.779282 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.779311 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.779333 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:41Z","lastTransitionTime":"2026-03-08T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.883227 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.883310 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.883335 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.883367 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.883390 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:41Z","lastTransitionTime":"2026-03-08T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.986671 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.986836 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.986865 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.986891 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:41 crc kubenswrapper[4762]: I0308 00:24:41.986911 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:41Z","lastTransitionTime":"2026-03-08T00:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.090251 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.090352 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.090370 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.090415 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.090435 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:42Z","lastTransitionTime":"2026-03-08T00:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.107545 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-px6h9"] Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.108695 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-px6h9" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.111587 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.113262 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.113589 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.125682 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11fe626-2001-4b03-8751-2498c02e9969\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-08T00:24:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0308 00:24:06.836532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0308 00:24:06.836678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0308 00:24:06.837491 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2787369083/tls.crt::/tmp/serving-cert-2787369083/tls.key\\\\\\\"\\\\nI0308 00:24:07.111979 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0308 00:24:07.116315 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0308 00:24:07.116334 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0308 00:24:07.116359 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0308 00:24:07.116366 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0308 00:24:07.122520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0308 00:24:07.122542 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0308 00:24:07.122566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122587 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0308 00:24:07.122593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0308 00:24:07.122599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0308 00:24:07.122605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0308 00:24:07.124179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.144680 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.159125 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.173733 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.186500 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-px6h9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40312eee-9bd9-4999-8bb9-b19f7c62671b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9lh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-px6h9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.193481 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.193513 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.193523 4762 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.193542 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.193553 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:42Z","lastTransitionTime":"2026-03-08T00:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.216994 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b6c031-246c-4083-9abd-b7a907fce711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e82b1d21398de30a1313583757107986ea8218a412db94198a7b65edf7b580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99b0fad6cee96b801aec34455f6485f25221c40b655c26dfbf7e7c2c548618e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68595f5de1a1079d9fb869521151ac1e0c37f24ea45029cd2ee44d63252685eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13951f78d098c0736e7125a2bb1aac981c81285615c42d5f5548e9c29b28f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316ca2d63b5ba7595979d24bf20c2803b73bdff2b589d7af2e17d2d4810b121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://dc1db
d1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.234320 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.248691 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.258785 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40312eee-9bd9-4999-8bb9-b19f7c62671b-hosts-file\") pod \"node-resolver-px6h9\" (UID: 
\"40312eee-9bd9-4999-8bb9-b19f7c62671b\") " pod="openshift-dns/node-resolver-px6h9" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.258872 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9lh8\" (UniqueName: \"kubernetes.io/projected/40312eee-9bd9-4999-8bb9-b19f7c62671b-kube-api-access-l9lh8\") pod \"node-resolver-px6h9\" (UID: \"40312eee-9bd9-4999-8bb9-b19f7c62671b\") " pod="openshift-dns/node-resolver-px6h9" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.262669 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.262733 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:42 crc kubenswrapper[4762]: E0308 00:24:42.262850 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:42 crc kubenswrapper[4762]: E0308 00:24:42.262956 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.265607 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.296693 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.296733 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.296795 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.296814 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.296825 4762 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:42Z","lastTransitionTime":"2026-03-08T00:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.360116 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40312eee-9bd9-4999-8bb9-b19f7c62671b-hosts-file\") pod \"node-resolver-px6h9\" (UID: \"40312eee-9bd9-4999-8bb9-b19f7c62671b\") " pod="openshift-dns/node-resolver-px6h9" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.360182 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9lh8\" (UniqueName: \"kubernetes.io/projected/40312eee-9bd9-4999-8bb9-b19f7c62671b-kube-api-access-l9lh8\") pod \"node-resolver-px6h9\" (UID: \"40312eee-9bd9-4999-8bb9-b19f7c62671b\") " pod="openshift-dns/node-resolver-px6h9" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.360425 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/40312eee-9bd9-4999-8bb9-b19f7c62671b-hosts-file\") pod \"node-resolver-px6h9\" (UID: \"40312eee-9bd9-4999-8bb9-b19f7c62671b\") " pod="openshift-dns/node-resolver-px6h9" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.400695 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9lh8\" (UniqueName: \"kubernetes.io/projected/40312eee-9bd9-4999-8bb9-b19f7c62671b-kube-api-access-l9lh8\") pod \"node-resolver-px6h9\" (UID: \"40312eee-9bd9-4999-8bb9-b19f7c62671b\") " pod="openshift-dns/node-resolver-px6h9" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.400936 4762 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.401161 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.401317 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.401448 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.401601 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:42Z","lastTransitionTime":"2026-03-08T00:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.433184 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-px6h9" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.501544 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-bx2x4"] Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.502678 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-c4plq"] Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.503129 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.504103 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.504387 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xglnk"] Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.505602 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.508908 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.508965 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.508992 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.509029 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.509053 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:42Z","lastTransitionTime":"2026-03-08T00:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.509224 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.509569 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.509667 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.509922 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.510368 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.510596 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.510874 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.511012 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.511152 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.511616 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.511894 4762 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.512741 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.542477 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11fe626-2001-4b03-8751-2498c02e9969\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-08T00:24:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0308 00:24:06.836532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0308 00:24:06.836678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0308 00:24:06.837491 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2787369083/tls.crt::/tmp/serving-cert-2787369083/tls.key\\\\\\\"\\\\nI0308 00:24:07.111979 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0308 00:24:07.116315 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0308 00:24:07.116334 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0308 00:24:07.116359 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0308 00:24:07.116366 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0308 00:24:07.122520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0308 00:24:07.122542 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0308 00:24:07.122566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122587 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0308 00:24:07.122593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0308 00:24:07.122599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0308 00:24:07.122605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0308 00:24:07.124179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.556657 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.561943 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-multus-conf-dir\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.562094 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5e384d81-de01-4ab9-b10b-2c9c5b45422c-rootfs\") pod \"machine-config-daemon-bx2x4\" (UID: \"5e384d81-de01-4ab9-b10b-2c9c5b45422c\") " pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.562238 
4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7d2d9680-4c98-4b01-a49b-90766e2331b9-os-release\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.562368 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-system-cni-dir\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.562515 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d2d9680-4c98-4b01-a49b-90766e2331b9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.562736 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwcb2\" (UniqueName: \"kubernetes.io/projected/5e384d81-de01-4ab9-b10b-2c9c5b45422c-kube-api-access-pwcb2\") pod \"machine-config-daemon-bx2x4\" (UID: \"5e384d81-de01-4ab9-b10b-2c9c5b45422c\") " pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.562827 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7d2d9680-4c98-4b01-a49b-90766e2331b9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " 
pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.562868 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c82b8767-5225-48de-aa6f-4668a0c01fcc-multus-daemon-config\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.562904 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-etc-kubernetes\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.562937 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-var-lib-kubelet\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.562974 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-os-release\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563010 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-multus-socket-dir-parent\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " 
pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563045 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5e384d81-de01-4ab9-b10b-2c9c5b45422c-proxy-tls\") pod \"machine-config-daemon-bx2x4\" (UID: \"5e384d81-de01-4ab9-b10b-2c9c5b45422c\") " pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563078 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7d2d9680-4c98-4b01-a49b-90766e2331b9-cni-binary-copy\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563109 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-cnibin\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563155 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-multus-cni-dir\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563187 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-run-multus-certs\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " 
pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563235 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c82b8767-5225-48de-aa6f-4668a0c01fcc-cni-binary-copy\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563268 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7d2d9680-4c98-4b01-a49b-90766e2331b9-cnibin\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563299 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-run-k8s-cni-cncf-io\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563343 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d2d9680-4c98-4b01-a49b-90766e2331b9-system-cni-dir\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563390 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grm74\" (UniqueName: \"kubernetes.io/projected/7d2d9680-4c98-4b01-a49b-90766e2331b9-kube-api-access-grm74\") pod \"multus-additional-cni-plugins-xglnk\" 
(UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563430 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-hostroot\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563487 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-var-lib-cni-bin\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563529 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-var-lib-cni-multus\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563569 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5e384d81-de01-4ab9-b10b-2c9c5b45422c-mcd-auth-proxy-config\") pod \"machine-config-daemon-bx2x4\" (UID: \"5e384d81-de01-4ab9-b10b-2c9c5b45422c\") " pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563608 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-run-netns\") pod 
\"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.563653 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rd6l\" (UniqueName: \"kubernetes.io/projected/c82b8767-5225-48de-aa6f-4668a0c01fcc-kube-api-access-2rd6l\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.571491 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.589449 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.602284 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-px6h9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40312eee-9bd9-4999-8bb9-b19f7c62671b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9lh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-px6h9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.612023 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.612103 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.612122 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.612170 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 
00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.612188 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:42Z","lastTransitionTime":"2026-03-08T00:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.617794 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.632318 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c4plq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82b8767-5225-48de-aa6f-4668a0c01fcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rd6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c4plq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.657570 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b6c031-246c-4083-9abd-b7a907fce711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e82b1d21398de30a1313583757107986ea8218a412db94198a7b65edf7b580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99b0fad6cee96b801aec34455f6485f25221c40b655c26dfbf7e7c2c548618e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68595f5de1a1079d9fb869521151ac1e0c37f24ea45029cd2ee44d63252685eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13951f78d098c0736e7125a2bb1aac981c81285615c42d5f5548e9c29b28f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316ca2d63b5ba7595979d24bf20c2803b73bdff2b589d7af2e17d2d4810b121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\
",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.664653 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c82b8767-5225-48de-aa6f-4668a0c01fcc-multus-daemon-config\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.664707 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-etc-kubernetes\") pod 
\"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.664738 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-var-lib-kubelet\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.664796 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5e384d81-de01-4ab9-b10b-2c9c5b45422c-proxy-tls\") pod \"machine-config-daemon-bx2x4\" (UID: \"5e384d81-de01-4ab9-b10b-2c9c5b45422c\") " pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.664845 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7d2d9680-4c98-4b01-a49b-90766e2331b9-cni-binary-copy\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.664878 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-cnibin\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.664912 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-os-release\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 
00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.664941 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-multus-socket-dir-parent\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.664971 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-run-multus-certs\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665019 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-multus-cni-dir\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665059 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c82b8767-5225-48de-aa6f-4668a0c01fcc-cni-binary-copy\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665085 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7d2d9680-4c98-4b01-a49b-90766e2331b9-cnibin\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665113 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-run-k8s-cni-cncf-io\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665148 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d2d9680-4c98-4b01-a49b-90766e2331b9-system-cni-dir\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665186 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grm74\" (UniqueName: \"kubernetes.io/projected/7d2d9680-4c98-4b01-a49b-90766e2331b9-kube-api-access-grm74\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665216 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-hostroot\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665287 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-var-lib-cni-bin\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665324 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-var-lib-cni-multus\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665362 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5e384d81-de01-4ab9-b10b-2c9c5b45422c-mcd-auth-proxy-config\") pod \"machine-config-daemon-bx2x4\" (UID: \"5e384d81-de01-4ab9-b10b-2c9c5b45422c\") " pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665394 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-run-netns\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665425 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rd6l\" (UniqueName: \"kubernetes.io/projected/c82b8767-5225-48de-aa6f-4668a0c01fcc-kube-api-access-2rd6l\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665459 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5e384d81-de01-4ab9-b10b-2c9c5b45422c-rootfs\") pod \"machine-config-daemon-bx2x4\" (UID: \"5e384d81-de01-4ab9-b10b-2c9c5b45422c\") " pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665490 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/7d2d9680-4c98-4b01-a49b-90766e2331b9-os-release\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665521 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-system-cni-dir\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665556 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-multus-conf-dir\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665592 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d2d9680-4c98-4b01-a49b-90766e2331b9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665643 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwcb2\" (UniqueName: \"kubernetes.io/projected/5e384d81-de01-4ab9-b10b-2c9c5b45422c-kube-api-access-pwcb2\") pod \"machine-config-daemon-bx2x4\" (UID: \"5e384d81-de01-4ab9-b10b-2c9c5b45422c\") " pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.665676 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7d2d9680-4c98-4b01-a49b-90766e2331b9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.666640 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7d2d9680-4c98-4b01-a49b-90766e2331b9-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.667158 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-os-release\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.667204 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-etc-kubernetes\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.667289 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-multus-cni-dir\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.667314 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-system-cni-dir\") pod \"multus-c4plq\" (UID: 
\"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.667404 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-run-multus-certs\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.667421 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-multus-socket-dir-parent\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.667487 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5e384d81-de01-4ab9-b10b-2c9c5b45422c-rootfs\") pod \"machine-config-daemon-bx2x4\" (UID: \"5e384d81-de01-4ab9-b10b-2c9c5b45422c\") " pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.667498 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c82b8767-5225-48de-aa6f-4668a0c01fcc-multus-daemon-config\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.667535 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7d2d9680-4c98-4b01-a49b-90766e2331b9-os-release\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc 
kubenswrapper[4762]: I0308 00:24:42.667554 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-var-lib-kubelet\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.667576 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-multus-conf-dir\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.667640 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-cnibin\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.667699 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-var-lib-cni-bin\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.667793 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7d2d9680-4c98-4b01-a49b-90766e2331b9-system-cni-dir\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.667838 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-hostroot\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.667801 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7d2d9680-4c98-4b01-a49b-90766e2331b9-cni-binary-copy\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.668040 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-run-netns\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.668114 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-var-lib-cni-multus\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.668189 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7d2d9680-4c98-4b01-a49b-90766e2331b9-cnibin\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.668202 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c82b8767-5225-48de-aa6f-4668a0c01fcc-host-run-k8s-cni-cncf-io\") pod \"multus-c4plq\" (UID: 
\"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.668399 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7d2d9680-4c98-4b01-a49b-90766e2331b9-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.668525 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c82b8767-5225-48de-aa6f-4668a0c01fcc-cni-binary-copy\") pod \"multus-c4plq\" (UID: \"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.668545 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5e384d81-de01-4ab9-b10b-2c9c5b45422c-mcd-auth-proxy-config\") pod \"machine-config-daemon-bx2x4\" (UID: \"5e384d81-de01-4ab9-b10b-2c9c5b45422c\") " pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.672612 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.675930 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5e384d81-de01-4ab9-b10b-2c9c5b45422c-proxy-tls\") pod \"machine-config-daemon-bx2x4\" (UID: \"5e384d81-de01-4ab9-b10b-2c9c5b45422c\") " pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.687439 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.689587 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwcb2\" (UniqueName: \"kubernetes.io/projected/5e384d81-de01-4ab9-b10b-2c9c5b45422c-kube-api-access-pwcb2\") pod \"machine-config-daemon-bx2x4\" (UID: \"5e384d81-de01-4ab9-b10b-2c9c5b45422c\") " pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.698567 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-px6h9" event={"ID":"40312eee-9bd9-4999-8bb9-b19f7c62671b","Type":"ContainerStarted","Data":"32f10ea808dc933495d9b45b4e78c7773cc7e2bfc47a9daa2a7ed29af0a9a629"} Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.699917 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rd6l\" (UniqueName: \"kubernetes.io/projected/c82b8767-5225-48de-aa6f-4668a0c01fcc-kube-api-access-2rd6l\") pod \"multus-c4plq\" (UID: 
\"c82b8767-5225-48de-aa6f-4668a0c01fcc\") " pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.703382 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d2d9680-4c98-4b01-a49b-90766e2331b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xglnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.703700 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grm74\" (UniqueName: \"kubernetes.io/projected/7d2d9680-4c98-4b01-a49b-90766e2331b9-kube-api-access-grm74\") pod \"multus-additional-cni-plugins-xglnk\" (UID: \"7d2d9680-4c98-4b01-a49b-90766e2331b9\") " pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.713924 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.713966 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.713979 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.713997 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.714009 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:42Z","lastTransitionTime":"2026-03-08T00:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.716258 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.727835 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.738248 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e384d81-de01-4ab9-b10b-2c9c5b45422c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bx2x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.747970 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11fe626-2001-4b03-8751-2498c02e9969\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-08T00:24:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0308 00:24:06.836532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0308 00:24:06.836678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0308 00:24:06.837491 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2787369083/tls.crt::/tmp/serving-cert-2787369083/tls.key\\\\\\\"\\\\nI0308 00:24:07.111979 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0308 00:24:07.116315 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0308 00:24:07.116334 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0308 00:24:07.116359 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0308 00:24:07.116366 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0308 00:24:07.122520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0308 00:24:07.122542 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0308 00:24:07.122566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122587 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0308 00:24:07.122593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0308 00:24:07.122599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0308 00:24:07.122605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0308 00:24:07.124179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.756406 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-px6h9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40312eee-9bd9-4999-8bb9-b19f7c62671b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9lh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-px6h9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.768453 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c4plq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82b8767-5225-48de-aa6f-4668a0c01fcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rd6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c4plq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.786690 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b6c031-246c-4083-9abd-b7a907fce711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e82b1d21398de30a1313583757107986ea8218a412db94198a7b65edf7b580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99b0fad6cee96b801aec34455f6485f25221c40b655c26dfbf7e7c2c548618e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68595f5de1a1079d9fb869521151ac1e0c37f24ea45029cd2ee44d63252685eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13951f78d098c0736e7125a2bb1aac981c81285615c42d5f5548e9c29b28f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316ca2d63b5ba7595979d24bf20c2803b73bdff2b589d7af2e17d2d4810b121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\
",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.796309 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.809161 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.817052 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.817151 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.817172 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.817198 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.817220 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:42Z","lastTransitionTime":"2026-03-08T00:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.819959 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.834085 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-c4plq" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.834258 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.850595 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:24:42 crc kubenswrapper[4762]: W0308 00:24:42.852093 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc82b8767_5225_48de_aa6f_4668a0c01fcc.slice/crio-060ef5a57ec1e132abc971a3150e7e5becf4719c044f8447998b1dfb08a2d201 WatchSource:0}: Error finding container 060ef5a57ec1e132abc971a3150e7e5becf4719c044f8447998b1dfb08a2d201: Status 404 returned error can't find the container with id 060ef5a57ec1e132abc971a3150e7e5becf4719c044f8447998b1dfb08a2d201 Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.858714 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xglnk" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.865972 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hfbrb"] Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.867319 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.870191 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.870610 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.871911 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.871981 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.872091 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.873125 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.873521 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.888010 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11fe626-2001-4b03-8751-2498c02e9969\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-08T00:24:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0308 00:24:06.836532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0308 00:24:06.836678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0308 00:24:06.837491 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2787369083/tls.crt::/tmp/serving-cert-2787369083/tls.key\\\\\\\"\\\\nI0308 00:24:07.111979 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0308 00:24:07.116315 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0308 00:24:07.116334 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0308 00:24:07.116359 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0308 00:24:07.116366 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0308 00:24:07.122520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0308 00:24:07.122542 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0308 00:24:07.122566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122587 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0308 
00:24:07.122593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0308 00:24:07.122599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0308 00:24:07.122605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0308 00:24:07.124179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.902929 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-px6h9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40312eee-9bd9-4999-8bb9-b19f7c62671b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9lh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-px6h9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.919841 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.919895 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.919914 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.919939 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 
00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.919957 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:42Z","lastTransitionTime":"2026-03-08T00:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.924300 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b6c031-246c-4083-9abd-b7a907fce711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e82b1d21398de30a1313583757107986ea8218a412db94198a7b65edf7b580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99b0fad6cee96b801aec34455f6485f25221c40b655c26dfbf7e7c2c548618e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68595f5de1a1079d9fb869521151ac1e0c37f24ea45029cd2ee44d63252685eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13951f78d098c0736e7125a2bb1aac981c81285615c42d5f5548e9c29b28f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316ca2d63b5ba7595979d24bf20c2803b73bdff2b589d7af2e17d2d4810b121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.940421 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.955207 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c4plq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82b8767-5225-48de-aa6f-4668a0c01fcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rd6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c4plq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.969152 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.969961 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-openvswitch\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.970248 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-cni-netd\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" 
Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.970477 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovnkube-config\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.970702 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-systemd-units\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.970992 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-slash\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.971247 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-systemd\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.971475 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-node-log\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 
00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.971745 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.972163 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-ovn\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.972405 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-run-ovn-kubernetes\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.972663 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxppj\" (UniqueName: \"kubernetes.io/projected/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-kube-api-access-qxppj\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.973012 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-cni-bin\") pod \"ovnkube-node-hfbrb\" (UID: 
\"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.973258 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-kubelet\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.973525 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-var-lib-openvswitch\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.973793 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-log-socket\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.974059 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovnkube-script-lib\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.974293 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-run-netns\") pod 
\"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.974509 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-etc-openvswitch\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.974750 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-env-overrides\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.975032 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovn-node-metrics-cert\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.981215 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:42 crc kubenswrapper[4762]: I0308 00:24:42.993182 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.003741 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.013246 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.023039 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.023110 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.023135 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.023162 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.023179 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:43Z","lastTransitionTime":"2026-03-08T00:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.023399 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e384d81-de01-4ab9-b10b-2c9c5b45422c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bx2x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.034202 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d2d9680-4c98-4b01-a49b-90766e2331b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xglnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.053774 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hfbrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.076698 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-var-lib-openvswitch\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.076746 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-log-socket\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.076788 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovnkube-script-lib\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.076828 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-run-netns\") pod \"ovnkube-node-hfbrb\" (UID: 
\"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.076858 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-etc-openvswitch\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.076877 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-env-overrides\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.076869 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-var-lib-openvswitch\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.076897 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovn-node-metrics-cert\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077014 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-openvswitch\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077049 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-cni-netd\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077094 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-systemd-units\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077118 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-slash\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077143 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovnkube-config\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077174 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-systemd\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 
00:24:43.077200 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-node-log\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077221 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077249 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-ovn\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077281 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-run-ovn-kubernetes\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077325 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-cni-bin\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077348 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxppj\" (UniqueName: \"kubernetes.io/projected/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-kube-api-access-qxppj\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077425 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-kubelet\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077469 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-run-netns\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077500 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-systemd\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077518 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-ovn\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077530 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-log-socket\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077553 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-run-ovn-kubernetes\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077555 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-openvswitch\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077561 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-cni-bin\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077557 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-etc-openvswitch\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077581 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-node-log\") pod \"ovnkube-node-hfbrb\" (UID: 
\"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077528 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-kubelet\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077617 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-systemd-units\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077630 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-slash\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077638 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.077594 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-cni-netd\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.078440 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovnkube-script-lib\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.079064 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-env-overrides\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.079248 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovnkube-config\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.081339 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovn-node-metrics-cert\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.099190 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxppj\" (UniqueName: \"kubernetes.io/projected/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-kube-api-access-qxppj\") pod \"ovnkube-node-hfbrb\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc 
kubenswrapper[4762]: I0308 00:24:43.126902 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.126959 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.126971 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.127008 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.127020 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:43Z","lastTransitionTime":"2026-03-08T00:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.191156 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:43 crc kubenswrapper[4762]: W0308 00:24:43.208138 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c6764d8_a35c_4d3f_8b38_1cec1782d9bf.slice/crio-89864bf0641f265ad45d3cfec592d05f16033d6dd4f549ff580761f55902ee4f WatchSource:0}: Error finding container 89864bf0641f265ad45d3cfec592d05f16033d6dd4f549ff580761f55902ee4f: Status 404 returned error can't find the container with id 89864bf0641f265ad45d3cfec592d05f16033d6dd4f549ff580761f55902ee4f Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.233484 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.233559 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.233575 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.233681 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.233697 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:43Z","lastTransitionTime":"2026-03-08T00:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.262505 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:43 crc kubenswrapper[4762]: E0308 00:24:43.262752 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.336742 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.336912 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.336945 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.336978 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.337004 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:43Z","lastTransitionTime":"2026-03-08T00:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.441792 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.441859 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.441878 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.441905 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.441924 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:43Z","lastTransitionTime":"2026-03-08T00:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.545401 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.545927 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.545957 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.545992 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.546016 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:43Z","lastTransitionTime":"2026-03-08T00:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.649527 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.649623 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.649648 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.649681 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.649706 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:43Z","lastTransitionTime":"2026-03-08T00:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.703179 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"683bcc908802b30c59fdf7fcfe628e98feafe7916759cba3da92e274552a99d0"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.705416 4762 generic.go:334] "Generic (PLEG): container finished" podID="7d2d9680-4c98-4b01-a49b-90766e2331b9" containerID="9c24c96bd122812835e33f36e9ef954574ec8210bec7738aedb7c1b481a10623" exitCode=0 Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.705482 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" event={"ID":"7d2d9680-4c98-4b01-a49b-90766e2331b9","Type":"ContainerDied","Data":"9c24c96bd122812835e33f36e9ef954574ec8210bec7738aedb7c1b481a10623"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.705506 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" event={"ID":"7d2d9680-4c98-4b01-a49b-90766e2331b9","Type":"ContainerStarted","Data":"def0cfe1b99ae698869f0fde74f5399efabdda8ac9691af60d32b3e16bc0acef"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.708258 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"1fe200867c42cdebd7cc5148326937c97c4fbeb6de5bb4aec90b3639ed803d1d"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.708345 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"257945ccf73ed75a308d80dc75a5f11ebd89eba7e7970e38512c4bec2dcc8e73"} Mar 08 00:24:43 crc 
kubenswrapper[4762]: I0308 00:24:43.708368 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"ecd369eefe3605564a4c91efc876208b9d66f9dd08636b629dd5d975de38cb29"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.713342 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c4plq" event={"ID":"c82b8767-5225-48de-aa6f-4668a0c01fcc","Type":"ContainerStarted","Data":"91685841f1720dd6ca9ec9df2a692c30e4df50d24f251b43e6f4737e3a9d7e73"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.713426 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c4plq" event={"ID":"c82b8767-5225-48de-aa6f-4668a0c01fcc","Type":"ContainerStarted","Data":"060ef5a57ec1e132abc971a3150e7e5becf4719c044f8447998b1dfb08a2d201"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.717160 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-px6h9" event={"ID":"40312eee-9bd9-4999-8bb9-b19f7c62671b","Type":"ContainerStarted","Data":"f27831e91d4c784cbe5613ff51c0f955284de1c5b29885418f91651a50ade288"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.720614 4762 generic.go:334] "Generic (PLEG): container finished" podID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerID="ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0" exitCode=0 Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.720670 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerDied","Data":"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.720703 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" 
event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerStarted","Data":"89864bf0641f265ad45d3cfec592d05f16033d6dd4f549ff580761f55902ee4f"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.731909 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d2d9680-4c98-4b01-a49b-90766e2331b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xglnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.753290 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.753341 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.753355 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.753378 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.753392 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:43Z","lastTransitionTime":"2026-03-08T00:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.769194 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hfbrb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.785428 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.800362 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.812678 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e384d81-de01-4ab9-b10b-2c9c5b45422c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bx2x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.832885 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11fe626-2001-4b03-8751-2498c02e9969\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-08T00:24:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0308 00:24:06.836532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0308 00:24:06.836678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0308 00:24:06.837491 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2787369083/tls.crt::/tmp/serving-cert-2787369083/tls.key\\\\\\\"\\\\nI0308 00:24:07.111979 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0308 00:24:07.116315 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0308 00:24:07.116334 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0308 00:24:07.116359 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0308 00:24:07.116366 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0308 00:24:07.122520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0308 00:24:07.122542 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0308 00:24:07.122566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122587 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0308 00:24:07.122593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0308 00:24:07.122599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0308 00:24:07.122605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0308 00:24:07.124179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.843870 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-px6h9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40312eee-9bd9-4999-8bb9-b19f7c62671b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9lh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-px6h9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.856291 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c4plq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82b8767-5225-48de-aa6f-4668a0c01fcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rd6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c4plq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.859116 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.859166 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.859180 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.859199 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.859216 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:43Z","lastTransitionTime":"2026-03-08T00:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.872796 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b6c031-246c-4083-9abd-b7a907fce711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e82b1d21398de30a1313583757107986ea8218a412db94198a7b65edf7b580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99b0fad6cee96b801aec34455f6485f25221c40b655c26dfbf7e7c2c548618e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68595f5de1a1079d9fb869521151ac1e0c37f24ea45029cd2ee44d63252685eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13951f78d098c0736e7125a2bb1aac981c81285615c42d5f5548e9c29b28f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316ca2d63b5ba7595979d24bf20c2803b73bdff2b589d7af2e17d2d4810b121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-08T00:23:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.882949 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.894201 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683bcc908802b30c59fdf7fcfe628e98feafe7916759cba3da92e274552a99d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.907785 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.918597 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.929725 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d2d9680-4c98-4b01-a49b-90766e2331b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c24c96bd122812835e33f36e9ef954574ec8210bec7738aedb7c1b481a10623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c24c96bd122812835e33f36e9ef954574ec8210bec7738aedb7c1b481a10623\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xglnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.944136 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acc
ess-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hfbrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.955998 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.962670 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.962807 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.962824 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.962916 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.962936 4762 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:43Z","lastTransitionTime":"2026-03-08T00:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.966483 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.976825 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e384d81-de01-4ab9-b10b-2c9c5b45422c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe200867c42cdebd7cc5148326937c97c4fbeb6de5bb4aec90b3639ed803d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257945ccf73ed75a308d80dc75a5f11ebd89eba7
e7970e38512c4bec2dcc8e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bx2x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:43 crc kubenswrapper[4762]: I0308 00:24:43.989455 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11fe626-2001-4b03-8751-2498c02e9969\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-08T00:24:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0308 00:24:06.836532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0308 00:24:06.836678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0308 00:24:06.837491 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2787369083/tls.crt::/tmp/serving-cert-2787369083/tls.key\\\\\\\"\\\\nI0308 00:24:07.111979 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0308 00:24:07.116315 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0308 00:24:07.116334 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0308 00:24:07.116359 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0308 00:24:07.116366 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0308 00:24:07.122520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0308 00:24:07.122542 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0308 00:24:07.122566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122587 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0308 
00:24:07.122593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0308 00:24:07.122599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0308 00:24:07.122605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0308 00:24:07.124179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.002453 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-px6h9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40312eee-9bd9-4999-8bb9-b19f7c62671b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27831e91d4c784cbe5613ff51c0f955284de1c5b29885418f91651a50ade288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9lh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-px6h9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.012819 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c4plq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82b8767-5225-48de-aa6f-4668a0c01fcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91685841f1720dd6ca9ec9df2a692c30e4df50d24f251b43e6f4737e3a9d7e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rd6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-c4plq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.030952 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b6c031-246c-4083-9abd-b7a907fce711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e82b1d21398de30a1313583757107986ea8218a412db94198a7b65edf7b580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"
static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99b0fad6cee96b801aec34455f6485f25221c40b655c26dfbf7e7c2c548618e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68595f5de1a1079d9fb869521151ac1e0c37f24ea45029cd2ee44d63252685eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13951f78d098c0736e7125a2bb1aac981c81285
615c42d5f5548e9c29b28f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316ca2d63b5ba7595979d24bf20c2803b73bdff2b589d7af2e17d2d4810b121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b
54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.048139 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.060287 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683bcc908802b30c59fdf7fcfe628e98feafe7916759cba3da92e274552a99d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.066296 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.066327 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.066338 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.066354 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.066365 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:44Z","lastTransitionTime":"2026-03-08T00:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.072671 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.084159 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.169803 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.170304 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.170314 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.170335 4762 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.170347 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:44Z","lastTransitionTime":"2026-03-08T00:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.262355 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.262378 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:44 crc kubenswrapper[4762]: E0308 00:24:44.262623 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:44 crc kubenswrapper[4762]: E0308 00:24:44.262688 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.272844 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.272898 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.272919 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.272947 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.272965 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:44Z","lastTransitionTime":"2026-03-08T00:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.376463 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.376533 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.376558 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.376589 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.376614 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:44Z","lastTransitionTime":"2026-03-08T00:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.479923 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.480015 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.480073 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.480101 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.480114 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:44Z","lastTransitionTime":"2026-03-08T00:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.583219 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.583261 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.583273 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.583289 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.583299 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:44Z","lastTransitionTime":"2026-03-08T00:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.690613 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.691185 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.691203 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.691228 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.691246 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:44Z","lastTransitionTime":"2026-03-08T00:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.734305 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" event={"ID":"7d2d9680-4c98-4b01-a49b-90766e2331b9","Type":"ContainerStarted","Data":"ddf8ac585c82b92c49c4748a95ae05c99ce38a6a67a50a0506197c2a995cfbcd"} Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.776860 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerStarted","Data":"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84"} Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.776927 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerStarted","Data":"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2"} Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.776948 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerStarted","Data":"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d"} Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.782691 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d2d9680-4c98-4b01-a49b-90766e2331b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c24c96bd122812835e33f36e9ef954574ec8210bec7738aedb7c1b481a10623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c24c96bd122812835e33f36e9ef954574ec8210bec7738aedb7c1b481a10623\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddf8ac585c82b92c49c4748a95ae05c99ce38a6a67a50a0506197c2a995cfbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xglnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.793944 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.793974 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.793986 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.794003 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.794016 4762 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:44Z","lastTransitionTime":"2026-03-08T00:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.802266 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hfbrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.816894 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.827578 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.841210 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e384d81-de01-4ab9-b10b-2c9c5b45422c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe200867c42cdebd7cc5148326937c97c4fbeb6de5bb4aec90b3639ed803d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257945ccf73ed75a308d80dc75a5f11ebd89eba7
e7970e38512c4bec2dcc8e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bx2x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.854733 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11fe626-2001-4b03-8751-2498c02e9969\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-08T00:24:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0308 00:24:06.836532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0308 00:24:06.836678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0308 00:24:06.837491 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2787369083/tls.crt::/tmp/serving-cert-2787369083/tls.key\\\\\\\"\\\\nI0308 00:24:07.111979 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0308 00:24:07.116315 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0308 00:24:07.116334 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0308 00:24:07.116359 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0308 00:24:07.116366 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0308 00:24:07.122520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0308 00:24:07.122542 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0308 00:24:07.122566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122587 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0308 
00:24:07.122593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0308 00:24:07.122599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0308 00:24:07.122605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0308 00:24:07.124179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.864124 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-px6h9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"40312eee-9bd9-4999-8bb9-b19f7c62671b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27831e91d4c784cbe5613ff51c0f955284de1c5b29885418f91651a50ade288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9lh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-px6h9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.876257 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c4plq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82b8767-5225-48de-aa6f-4668a0c01fcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91685841f1720dd6ca9ec9df2a692c30e4df50d24f251b43e6f4737e3a9d7e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rd6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-c4plq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.894340 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b6c031-246c-4083-9abd-b7a907fce711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e82b1d21398de30a1313583757107986ea8218a412db94198a7b65edf7b580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"
static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99b0fad6cee96b801aec34455f6485f25221c40b655c26dfbf7e7c2c548618e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68595f5de1a1079d9fb869521151ac1e0c37f24ea45029cd2ee44d63252685eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13951f78d098c0736e7125a2bb1aac981c81285
615c42d5f5548e9c29b28f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316ca2d63b5ba7595979d24bf20c2803b73bdff2b589d7af2e17d2d4810b121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b
54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.896805 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.896852 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.896862 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.896883 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.896897 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:44Z","lastTransitionTime":"2026-03-08T00:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.913907 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.929356 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683bcc908802b30c59fdf7fcfe628e98feafe7916759cba3da92e274552a99d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.950291 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.981847 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.999864 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.999897 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:44 crc kubenswrapper[4762]: I0308 00:24:44.999914 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:44.999931 4762 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:44.999946 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:44Z","lastTransitionTime":"2026-03-08T00:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.101953 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.101997 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.102008 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.102023 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.102036 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:45Z","lastTransitionTime":"2026-03-08T00:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.204687 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.205137 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.205249 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.205362 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.205440 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:45Z","lastTransitionTime":"2026-03-08T00:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.263252 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:45 crc kubenswrapper[4762]: E0308 00:24:45.263383 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.307806 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.307844 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.307856 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.307872 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.307884 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:45Z","lastTransitionTime":"2026-03-08T00:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.410566 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.410967 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.411071 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.411207 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.411303 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:45Z","lastTransitionTime":"2026-03-08T00:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.514158 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.514402 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.514467 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.514531 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.514602 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:45Z","lastTransitionTime":"2026-03-08T00:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.617381 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.617659 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.617765 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.617900 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.617997 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:45Z","lastTransitionTime":"2026-03-08T00:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.721437 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.722619 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.722658 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.722685 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.722703 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:45Z","lastTransitionTime":"2026-03-08T00:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.795534 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerStarted","Data":"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a"} Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.795636 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerStarted","Data":"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c"} Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.795656 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerStarted","Data":"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6"} Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.798870 4762 generic.go:334] "Generic (PLEG): container finished" podID="7d2d9680-4c98-4b01-a49b-90766e2331b9" containerID="ddf8ac585c82b92c49c4748a95ae05c99ce38a6a67a50a0506197c2a995cfbcd" exitCode=0 Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.798953 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" event={"ID":"7d2d9680-4c98-4b01-a49b-90766e2331b9","Type":"ContainerDied","Data":"ddf8ac585c82b92c49c4748a95ae05c99ce38a6a67a50a0506197c2a995cfbcd"} Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.804623 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"248b4bcdf13661d7f313f8b292d840dfe9539bd8d3fdf4eef31d68b3d80a0d30"} Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.824000 4762 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11fe626-2001-4b03-8751-2498c02e9969\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e
6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-08T00:24:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0308 00:24:06.836532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0308 00:24:06.836678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0308 00:24:06.837491 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2787369083/tls.crt::/tmp/serving-cert-2787369083/tls.key\\\\\\\"\\\\nI0308 00:24:07.111979 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0308 00:24:07.116315 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0308 00:24:07.116334 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0308 00:24:07.116359 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0308 00:24:07.116366 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0308 00:24:07.122520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0308 00:24:07.122542 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0308 00:24:07.122566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122587 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0308 00:24:07.122593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0308 00:24:07.122599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0308 00:24:07.122605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0308 00:24:07.124179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde
5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.825122 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.825189 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.825202 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.825219 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:45 crc 
kubenswrapper[4762]: I0308 00:24:45.825230 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:45Z","lastTransitionTime":"2026-03-08T00:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.835523 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-px6h9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40312eee-9bd9-4999-8bb9-b19f7c62671b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27831e91d4c784cbe5613ff51c0f955284de1c5b29885418f91651a50ade288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9lh8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-px6h9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.856986 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b6c031-246c-4083-9abd-b7a907fce711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e82b1d21398de30a1313583757107986ea8218a412db94198a7b65edf7b580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99b0fad6cee96b801aec34455f6485f25221c40b655c26dfbf7e7c2c548618e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68595f5de1a1079d9fb869521151ac1e0c37f24ea45029cd2ee44d63252685eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13951f78d098c0736e7125a2bb1aac981c81285615c42d5f5548e9c29b28f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316ca2d63b5ba7595979d24bf20c2803b73bdff2b589d7af2e17d2d4810b121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.870695 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.883649 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c4plq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82b8767-5225-48de-aa6f-4668a0c01fcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91685841f1720dd6ca9ec9df2a692c30e4df50d24f251b43e6f4737e3a9d7e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rd6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c4plq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.900834 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683bcc908802b30c59fdf7fcfe628e98feafe7916759cba3da92e274552a99d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.919820 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.929711 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.929742 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.929751 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.929790 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.929802 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:45Z","lastTransitionTime":"2026-03-08T00:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.933093 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.944548 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.958026 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.967131 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e384d81-de01-4ab9-b10b-2c9c5b45422c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe200867c42cdebd7cc5148326937c97c4fbeb6de5bb4aec90b3639ed803d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257945ccf73ed75a308d80dc75a5f11ebd89eba7
e7970e38512c4bec2dcc8e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bx2x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.980331 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d2d9680-4c98-4b01-a49b-90766e2331b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c24c96bd122812835e33f36e9ef954574ec8210bec7738aedb7c1b481a10623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c24c96bd122812835e33f36e9ef954574ec8210bec7738aedb7c1b481a10623\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddf8ac585c82b92c49c4748a95ae05c99ce38a6a67a50a0506197c2a995cfbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddf8ac585c82b92c49c4748a95ae05c99ce38a6a67a50a0506197c2a995cfbcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xglnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.983002 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.983028 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.983038 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.983054 
4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.983068 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:45Z","lastTransitionTime":"2026-03-08T00:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:45 crc kubenswrapper[4762]: I0308 00:24:45.998597 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hfbrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: E0308 00:24:46.008319 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",
\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":
[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6f
b6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\
"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.012045 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f11fe626-2001-4b03-8751-2498c02e9969\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-08T00:24:07Z\\\",\\\"message\\\":\\\"le observer\\\\nW0308 00:24:06.836532 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0308 00:24:06.836678 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0308 00:24:06.837491 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2787369083/tls.crt::/tmp/serving-cert-2787369083/tls.key\\\\\\\"\\\\nI0308 00:24:07.111979 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0308 00:24:07.116315 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0308 00:24:07.116334 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0308 00:24:07.116359 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0308 00:24:07.116366 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0308 00:24:07.122520 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0308 00:24:07.122542 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0308 00:24:07.122566 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0308 00:24:07.122587 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0308 00:24:07.122593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0308 00:24:07.122599 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0308 00:24:07.122605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0308 00:24:07.124179 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:06Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.016040 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.016091 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.016101 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.016118 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.016129 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:46Z","lastTransitionTime":"2026-03-08T00:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.020913 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-px6h9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"40312eee-9bd9-4999-8bb9-b19f7c62671b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27831e91d4c784cbe5613ff51c0f955284de1c5b29885418f91651a50ade288\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l9lh8\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-px6h9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: E0308 00:24:46.028729 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.030484 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://248b4bcdf13661d7f313f8b292d840dfe9539bd8d3fdf4eef31d68b3d80a0d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.033421 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.033448 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.033460 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.033477 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.033490 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:46Z","lastTransitionTime":"2026-03-08T00:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:46 crc kubenswrapper[4762]: E0308 00:24:46.043298 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.045332 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c4plq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82b8767-5225-48de-aa6f-4668a0c01fcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91685841f1720dd6ca9ec9df2a692c30e4df50d24f251b43e6f4737e3a9d7e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rd6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c4plq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.046358 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.046379 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.046391 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.046404 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.046414 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:46Z","lastTransitionTime":"2026-03-08T00:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:46 crc kubenswrapper[4762]: E0308 00:24:46.058675 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.062166 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.062195 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.062204 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.062218 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.062227 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:46Z","lastTransitionTime":"2026-03-08T00:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:46 crc kubenswrapper[4762]: E0308 00:24:46.070310 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"70a459ab-aec5-4a81-84f3-03cc68c17eda\\\",\\\"systemUUID\\\":\\\"92130618-2066-4559-910d-c8073b27a95c\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: E0308 00:24:46.070418 4762 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.072845 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.072873 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.072882 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.072896 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.072908 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:46Z","lastTransitionTime":"2026-03-08T00:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.073214 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b6c031-246c-4083-9abd-b7a907fce711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e82b1d21398de30a1313583757107986ea8218a412db94198a7b65edf7b580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99b0fad6cee96b801aec34455f6485f25221c40b655c26dfbf7e7c2c548618e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68595f5de1a1079d9fb869521151ac1e0c37f24ea45029cd2ee44d63252685eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13951f78d098c0736e7125a2bb1aac981c81285615c42d5f5548e9c29b28f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316ca2d63b5ba7595979d24bf20c2803b73bdff2b589d7af2e17d2d4810b121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-03-08T00:23:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.087858 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683bcc908802b30c59fdf7fcfe628e98feafe7916759cba3da92e274552a99d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be742
1a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.101416 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.114287 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.127835 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e384d81-de01-4ab9-b10b-2c9c5b45422c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe200867c42cdebd7cc5148326937c97c4fbeb6de5bb4aec90b3639ed803d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257945ccf73ed75a308d80dc75a5f11ebd89eba7
e7970e38512c4bec2dcc8e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bx2x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.145521 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d2d9680-4c98-4b01-a49b-90766e2331b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c24c96bd122812835e33f36e9ef954574ec8210bec7738aedb7c1b481a10623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c24c96bd122812835e33f36e9ef954574ec8210bec7738aedb7c1b481a10623\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddf8ac585c82b92c49c4748a95ae05c99ce38a6a67a50a0506197c2a995cfbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddf8ac585c82b92c49c4748a95ae05c99ce38a6a67a50a0506197c2a995cfbcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xglnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.170485 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hfbrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.177882 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.177919 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.177931 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.177947 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.177958 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:46Z","lastTransitionTime":"2026-03-08T00:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.187219 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.198877 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.262529 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:46 crc kubenswrapper[4762]: E0308 00:24:46.262907 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.263296 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:46 crc kubenswrapper[4762]: E0308 00:24:46.263527 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.281932 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.281984 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.282002 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.282029 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.282047 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:46Z","lastTransitionTime":"2026-03-08T00:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.385327 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.385376 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.385394 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.385433 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.385449 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:46Z","lastTransitionTime":"2026-03-08T00:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.487493 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.487528 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.487538 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.487553 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.487562 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:46Z","lastTransitionTime":"2026-03-08T00:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.593759 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.593821 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.593837 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.593854 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.593866 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:46Z","lastTransitionTime":"2026-03-08T00:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.696695 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.697029 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.697143 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.697251 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.697334 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:46Z","lastTransitionTime":"2026-03-08T00:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.800648 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.800704 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.800721 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.800746 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.800767 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:46Z","lastTransitionTime":"2026-03-08T00:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.811130 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"860fb791e4380085a6aa337232e3ba6f47cc09b3663d226d29ac7bdd65661c03"} Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.811199 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1f6764e4d63ad145846e461d2e896dc10c2724124d48233bf7c3de68a2417ef7"} Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.815192 4762 generic.go:334] "Generic (PLEG): container finished" podID="7d2d9680-4c98-4b01-a49b-90766e2331b9" containerID="38c3a83e1ca859c57ba3a1d797078873f1d2ab94301d25e71db0c480b320d082" exitCode=0 Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.815283 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" event={"ID":"7d2d9680-4c98-4b01-a49b-90766e2331b9","Type":"ContainerDied","Data":"38c3a83e1ca859c57ba3a1d797078873f1d2ab94301d25e71db0c480b320d082"} Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.826460 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://248b4bcdf13661d7f313f8b292d840dfe9539bd8d3fdf4eef31d68b3d80a0d30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 
00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.844205 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-c4plq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c82b8767-5225-48de-aa6f-4668a0c01fcc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91685841f1720dd6ca9ec9df2a692c30e4df50d24f251b43e6f4737e3a9d7e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"
mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2rd6l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-c4plq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.878327 4762 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88b6c031-246c-4083-9abd-b7a907fce711\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:22:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6e82b1d21398de30a1313583757107986ea8218a412db94198a7b65edf7b580\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99b0fad6cee96b801aec34455f6485f25221c40b655c26dfbf7e7
c2c548618e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://68595f5de1a1079d9fb869521151ac1e0c37f24ea45029cd2ee44d63252685eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://13951f78d098c0736e7125a2bb1aac981c81285615c42d5f5548e9c29b28f242\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://316ca2d63b5ba7595979d24bf20c2803b73bdff2b589d7af2e17d2d4810b121b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:23:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61e69eb1bc219505812ddfba118f5a5157080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c64fa869621273b3d04cd91b66e61
e69eb1bc219505812ddfba118f5a5157080\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://76e4db72078af648098ae8e289739bc413b6c812f18f28e7f39b0b9644bc8e49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:01Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dc1dbd1374cec9dd1215fbbb421a2f55163ebd6594497a895c2f7579c0e0cca0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:23:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:23:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:22:59Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.895886 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://683bcc908802b30c59fdf7fcfe628e98feafe7916759cba3da92e274552a99d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.904540 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.904599 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.904619 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.904645 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.904665 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:46Z","lastTransitionTime":"2026-03-08T00:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.905919 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.916588 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.926025 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e384d81-de01-4ab9-b10b-2c9c5b45422c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1fe200867c42cdebd7cc5148326937c97c4fbeb6de5bb4aec90b3639ed803d1d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://257945ccf73ed75a308d80dc75a5f11ebd89eba7
e7970e38512c4bec2dcc8e73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pwcb2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-bx2x4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.938152 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d2d9680-4c98-4b01-a49b-90766e2331b9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c24c96bd122812835e33f36e9ef954574ec8210bec7738aedb7c1b481a10623\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c24c96bd122812835e33f36e9ef954574ec8210bec7738aedb7c1b481a10623\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddf8ac585c82b92c49c4748a95ae05c99ce38a6a67a50a0506197c2a995cfbcd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ddf8ac585c82b92c49c4748a95ae05c99ce38a6a67a50a0506197c2a995cfbcd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:24:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-grm74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xglnk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.956035 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-08T00:24:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-08T00:24:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxppj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-08T00:24:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hfbrb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:46 crc kubenswrapper[4762]: I0308 00:24:46.971611 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T00:24:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://860fb791e4380085a6aa337232e3ba6f47cc09b3663d226d29ac7bdd65661c03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1f6764e4d63ad145846e461d2e896dc10c2724124d48233bf7c3de68a2417ef7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:24:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.008104 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:47 crc 
kubenswrapper[4762]: I0308 00:24:47.008141 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.008152 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.008171 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.008182 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:47Z","lastTransitionTime":"2026-03-08T00:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.047729 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-px6h9" podStartSLOduration=47.047704441 podStartE2EDuration="47.047704441s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:24:47.047703171 +0000 UTC m=+108.521847555" watchObservedRunningTime="2026-03-08 00:24:47.047704441 +0000 UTC m=+108.521848825" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.118005 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podStartSLOduration=47.117984707 podStartE2EDuration="47.117984707s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-08 00:24:47.069230153 +0000 UTC m=+108.543374537" watchObservedRunningTime="2026-03-08 00:24:47.117984707 +0000 UTC m=+108.592129051" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.120695 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.120738 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.120750 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.120803 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.120814 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:47Z","lastTransitionTime":"2026-03-08T00:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.206833 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-c4plq" podStartSLOduration=47.206818511 podStartE2EDuration="47.206818511s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:24:47.206577074 +0000 UTC m=+108.680721458" watchObservedRunningTime="2026-03-08 00:24:47.206818511 +0000 UTC m=+108.680962855" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.222649 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.222683 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.222695 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.222710 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.222719 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:47Z","lastTransitionTime":"2026-03-08T00:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.231272 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=8.231255535 podStartE2EDuration="8.231255535s" podCreationTimestamp="2026-03-08 00:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:24:47.230650936 +0000 UTC m=+108.704795320" watchObservedRunningTime="2026-03-08 00:24:47.231255535 +0000 UTC m=+108.705399869" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.265085 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:47 crc kubenswrapper[4762]: E0308 00:24:47.265240 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.325404 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.325449 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.325461 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.325478 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.325498 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:47Z","lastTransitionTime":"2026-03-08T00:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.328269 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lj8bd"] Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.328587 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lj8bd" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.330531 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.331956 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.332372 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.341763 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.421123 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f0a5e837-1f5f-445f-8dba-e9fce0de2972-serviceca\") pod \"node-ca-lj8bd\" (UID: \"f0a5e837-1f5f-445f-8dba-e9fce0de2972\") " pod="openshift-image-registry/node-ca-lj8bd" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.421546 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0a5e837-1f5f-445f-8dba-e9fce0de2972-host\") pod \"node-ca-lj8bd\" (UID: \"f0a5e837-1f5f-445f-8dba-e9fce0de2972\") " pod="openshift-image-registry/node-ca-lj8bd" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.421637 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7whz\" (UniqueName: \"kubernetes.io/projected/f0a5e837-1f5f-445f-8dba-e9fce0de2972-kube-api-access-d7whz\") pod \"node-ca-lj8bd\" (UID: \"f0a5e837-1f5f-445f-8dba-e9fce0de2972\") " pod="openshift-image-registry/node-ca-lj8bd" Mar 08 00:24:47 crc 
kubenswrapper[4762]: I0308 00:24:47.427752 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.427796 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.427808 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.427821 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.427830 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:47Z","lastTransitionTime":"2026-03-08T00:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.490922 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr"] Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.491337 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.493318 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.493965 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.506387 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-gdnwf"] Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.506757 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:24:47 crc kubenswrapper[4762]: E0308 00:24:47.506828 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gdnwf" podUID="a6d5f4b4-a877-45da-9fed-81885011430f" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.522196 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0a5e837-1f5f-445f-8dba-e9fce0de2972-host\") pod \"node-ca-lj8bd\" (UID: \"f0a5e837-1f5f-445f-8dba-e9fce0de2972\") " pod="openshift-image-registry/node-ca-lj8bd" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.522285 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7whz\" (UniqueName: \"kubernetes.io/projected/f0a5e837-1f5f-445f-8dba-e9fce0de2972-kube-api-access-d7whz\") pod \"node-ca-lj8bd\" (UID: \"f0a5e837-1f5f-445f-8dba-e9fce0de2972\") " pod="openshift-image-registry/node-ca-lj8bd" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.522343 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f0a5e837-1f5f-445f-8dba-e9fce0de2972-host\") pod \"node-ca-lj8bd\" (UID: \"f0a5e837-1f5f-445f-8dba-e9fce0de2972\") " pod="openshift-image-registry/node-ca-lj8bd" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.522363 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f0a5e837-1f5f-445f-8dba-e9fce0de2972-serviceca\") pod \"node-ca-lj8bd\" (UID: \"f0a5e837-1f5f-445f-8dba-e9fce0de2972\") " pod="openshift-image-registry/node-ca-lj8bd" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.523567 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f0a5e837-1f5f-445f-8dba-e9fce0de2972-serviceca\") pod \"node-ca-lj8bd\" (UID: \"f0a5e837-1f5f-445f-8dba-e9fce0de2972\") " pod="openshift-image-registry/node-ca-lj8bd" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.531423 4762 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.531466 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.531480 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.531498 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.531510 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:47Z","lastTransitionTime":"2026-03-08T00:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.543328 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7whz\" (UniqueName: \"kubernetes.io/projected/f0a5e837-1f5f-445f-8dba-e9fce0de2972-kube-api-access-d7whz\") pod \"node-ca-lj8bd\" (UID: \"f0a5e837-1f5f-445f-8dba-e9fce0de2972\") " pod="openshift-image-registry/node-ca-lj8bd" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.623509 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1011a8b6-4256-49d6-9b29-8c2c6b79072b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8r2tr\" (UID: \"1011a8b6-4256-49d6-9b29-8c2c6b79072b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.623564 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1011a8b6-4256-49d6-9b29-8c2c6b79072b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8r2tr\" (UID: \"1011a8b6-4256-49d6-9b29-8c2c6b79072b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.623633 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1011a8b6-4256-49d6-9b29-8c2c6b79072b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8r2tr\" (UID: \"1011a8b6-4256-49d6-9b29-8c2c6b79072b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.623719 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44mz8\" (UniqueName: 
\"kubernetes.io/projected/1011a8b6-4256-49d6-9b29-8c2c6b79072b-kube-api-access-44mz8\") pod \"ovnkube-control-plane-749d76644c-8r2tr\" (UID: \"1011a8b6-4256-49d6-9b29-8c2c6b79072b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.623857 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs\") pod \"network-metrics-daemon-gdnwf\" (UID: \"a6d5f4b4-a877-45da-9fed-81885011430f\") " pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.623912 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx7jw\" (UniqueName: \"kubernetes.io/projected/a6d5f4b4-a877-45da-9fed-81885011430f-kube-api-access-nx7jw\") pod \"network-metrics-daemon-gdnwf\" (UID: \"a6d5f4b4-a877-45da-9fed-81885011430f\") " pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.635487 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.635540 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.635558 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.635582 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.635599 4762 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:47Z","lastTransitionTime":"2026-03-08T00:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.640964 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lj8bd" Mar 08 00:24:47 crc kubenswrapper[4762]: W0308 00:24:47.661461 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0a5e837_1f5f_445f_8dba_e9fce0de2972.slice/crio-f5c4bffdaf3f84827ddc027536e677a3f46f56830ff40ea0b53f4eb2deb7a994 WatchSource:0}: Error finding container f5c4bffdaf3f84827ddc027536e677a3f46f56830ff40ea0b53f4eb2deb7a994: Status 404 returned error can't find the container with id f5c4bffdaf3f84827ddc027536e677a3f46f56830ff40ea0b53f4eb2deb7a994 Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.724463 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs\") pod \"network-metrics-daemon-gdnwf\" (UID: \"a6d5f4b4-a877-45da-9fed-81885011430f\") " pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.724510 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx7jw\" (UniqueName: \"kubernetes.io/projected/a6d5f4b4-a877-45da-9fed-81885011430f-kube-api-access-nx7jw\") pod \"network-metrics-daemon-gdnwf\" (UID: \"a6d5f4b4-a877-45da-9fed-81885011430f\") " pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.724552 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1011a8b6-4256-49d6-9b29-8c2c6b79072b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8r2tr\" (UID: \"1011a8b6-4256-49d6-9b29-8c2c6b79072b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.724600 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1011a8b6-4256-49d6-9b29-8c2c6b79072b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8r2tr\" (UID: \"1011a8b6-4256-49d6-9b29-8c2c6b79072b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.724646 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1011a8b6-4256-49d6-9b29-8c2c6b79072b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8r2tr\" (UID: \"1011a8b6-4256-49d6-9b29-8c2c6b79072b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.724682 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44mz8\" (UniqueName: \"kubernetes.io/projected/1011a8b6-4256-49d6-9b29-8c2c6b79072b-kube-api-access-44mz8\") pod \"ovnkube-control-plane-749d76644c-8r2tr\" (UID: \"1011a8b6-4256-49d6-9b29-8c2c6b79072b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" Mar 08 00:24:47 crc kubenswrapper[4762]: E0308 00:24:47.724708 4762 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 00:24:47 crc kubenswrapper[4762]: E0308 00:24:47.724863 4762 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs podName:a6d5f4b4-a877-45da-9fed-81885011430f nodeName:}" failed. No retries permitted until 2026-03-08 00:24:48.224830881 +0000 UTC m=+109.698975315 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs") pod "network-metrics-daemon-gdnwf" (UID: "a6d5f4b4-a877-45da-9fed-81885011430f") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.726409 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1011a8b6-4256-49d6-9b29-8c2c6b79072b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8r2tr\" (UID: \"1011a8b6-4256-49d6-9b29-8c2c6b79072b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.726417 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1011a8b6-4256-49d6-9b29-8c2c6b79072b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8r2tr\" (UID: \"1011a8b6-4256-49d6-9b29-8c2c6b79072b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.730819 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1011a8b6-4256-49d6-9b29-8c2c6b79072b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8r2tr\" (UID: \"1011a8b6-4256-49d6-9b29-8c2c6b79072b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.737984 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.738038 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.738073 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.738100 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.738119 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:47Z","lastTransitionTime":"2026-03-08T00:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.745134 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44mz8\" (UniqueName: \"kubernetes.io/projected/1011a8b6-4256-49d6-9b29-8c2c6b79072b-kube-api-access-44mz8\") pod \"ovnkube-control-plane-749d76644c-8r2tr\" (UID: \"1011a8b6-4256-49d6-9b29-8c2c6b79072b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.753409 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx7jw\" (UniqueName: \"kubernetes.io/projected/a6d5f4b4-a877-45da-9fed-81885011430f-kube-api-access-nx7jw\") pod \"network-metrics-daemon-gdnwf\" (UID: \"a6d5f4b4-a877-45da-9fed-81885011430f\") " pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.803475 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" Mar 08 00:24:47 crc kubenswrapper[4762]: W0308 00:24:47.823673 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1011a8b6_4256_49d6_9b29_8c2c6b79072b.slice/crio-c6fc02390269ee805e13f1857345108222c1e8ab2742f5cc4b797708b20c5ad0 WatchSource:0}: Error finding container c6fc02390269ee805e13f1857345108222c1e8ab2742f5cc4b797708b20c5ad0: Status 404 returned error can't find the container with id c6fc02390269ee805e13f1857345108222c1e8ab2742f5cc4b797708b20c5ad0 Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.829542 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerStarted","Data":"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d"} Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.830627 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lj8bd" event={"ID":"f0a5e837-1f5f-445f-8dba-e9fce0de2972","Type":"ContainerStarted","Data":"f5c4bffdaf3f84827ddc027536e677a3f46f56830ff40ea0b53f4eb2deb7a994"} Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.833952 4762 generic.go:334] "Generic (PLEG): container finished" podID="7d2d9680-4c98-4b01-a49b-90766e2331b9" containerID="ad7e8e90d662879aaa35d1e68f99ede4ad5c3cd67d08770187113ceeaa4f9628" exitCode=0 Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.833987 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" event={"ID":"7d2d9680-4c98-4b01-a49b-90766e2331b9","Type":"ContainerDied","Data":"ad7e8e90d662879aaa35d1e68f99ede4ad5c3cd67d08770187113ceeaa4f9628"} Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.842124 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.842168 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.842182 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.842209 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.842222 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:47Z","lastTransitionTime":"2026-03-08T00:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.944699 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.944726 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.944738 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.944753 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:47 crc kubenswrapper[4762]: I0308 00:24:47.944768 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:47Z","lastTransitionTime":"2026-03-08T00:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.027406 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.027559 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-08 00:25:20.027529811 +0000 UTC m=+141.501674185 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.049633 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.049684 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.049700 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.049723 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.049741 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:48Z","lastTransitionTime":"2026-03-08T00:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.128481 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.128530 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.128551 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.128575 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.128690 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.128703 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.128714 4762 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.128752 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-08 00:25:20.128740417 +0000 UTC m=+141.602884751 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.129135 4762 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.129178 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-03-08 00:25:20.1291696 +0000 UTC m=+141.603313934 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.129217 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.129241 4762 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.129250 4762 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.129267 4762 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.129272 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-08 00:25:20.129265633 +0000 UTC m=+141.603409977 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.129312 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-08 00:25:20.129305924 +0000 UTC m=+141.603450268 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.152608 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.152638 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.152647 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.152659 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.152668 4762 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:48Z","lastTransitionTime":"2026-03-08T00:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.229656 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs\") pod \"network-metrics-daemon-gdnwf\" (UID: \"a6d5f4b4-a877-45da-9fed-81885011430f\") " pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.229880 4762 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.229963 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs podName:a6d5f4b4-a877-45da-9fed-81885011430f nodeName:}" failed. No retries permitted until 2026-03-08 00:24:49.229940632 +0000 UTC m=+110.704085016 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs") pod "network-metrics-daemon-gdnwf" (UID: "a6d5f4b4-a877-45da-9fed-81885011430f") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.255455 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.255492 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.255501 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.255516 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.255526 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:48Z","lastTransitionTime":"2026-03-08T00:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.263118 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.263125 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.263265 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:48 crc kubenswrapper[4762]: E0308 00:24:48.263304 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.359068 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.359124 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.359146 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.359171 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.359189 4762 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:48Z","lastTransitionTime":"2026-03-08T00:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.466965 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.467374 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.467385 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.467403 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.467415 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:48Z","lastTransitionTime":"2026-03-08T00:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.570991 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.571055 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.571081 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.571114 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.571147 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:48Z","lastTransitionTime":"2026-03-08T00:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.673899 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.673959 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.673976 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.674004 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.674021 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:48Z","lastTransitionTime":"2026-03-08T00:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.777640 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.777733 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.777764 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.777825 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.777855 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:48Z","lastTransitionTime":"2026-03-08T00:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.840926 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lj8bd" event={"ID":"f0a5e837-1f5f-445f-8dba-e9fce0de2972","Type":"ContainerStarted","Data":"be49ebaaa6bbedb9daba25cbfb25cc85a8457e4dc1efedc82f10d2206d435d1f"} Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.843590 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" event={"ID":"1011a8b6-4256-49d6-9b29-8c2c6b79072b","Type":"ContainerStarted","Data":"40569f8dacf66a210f75b7c46fd6ef83951bb5ababb5211aa3b1ef9fb561aebe"} Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.843664 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" event={"ID":"1011a8b6-4256-49d6-9b29-8c2c6b79072b","Type":"ContainerStarted","Data":"5b432f1b1c4b57c2f78608ad05a2216456fcd3fbee20611d3a4a1e36317fc949"} Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.843687 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" event={"ID":"1011a8b6-4256-49d6-9b29-8c2c6b79072b","Type":"ContainerStarted","Data":"c6fc02390269ee805e13f1857345108222c1e8ab2742f5cc4b797708b20c5ad0"} Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.847530 4762 generic.go:334] "Generic (PLEG): container finished" podID="7d2d9680-4c98-4b01-a49b-90766e2331b9" containerID="6fc9c88ab78abe4fbc24fd077bd75e1d8b688bcbaaaa745739bb1a40e5e16035" exitCode=0 Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.847585 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" event={"ID":"7d2d9680-4c98-4b01-a49b-90766e2331b9","Type":"ContainerDied","Data":"6fc9c88ab78abe4fbc24fd077bd75e1d8b688bcbaaaa745739bb1a40e5e16035"} Mar 08 00:24:48 crc kubenswrapper[4762]: 
I0308 00:24:48.880592 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.880635 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.880652 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.880676 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.880692 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:48Z","lastTransitionTime":"2026-03-08T00:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.901221 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lj8bd" podStartSLOduration=48.901181496 podStartE2EDuration="48.901181496s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:24:48.862267264 +0000 UTC m=+110.336411638" watchObservedRunningTime="2026-03-08 00:24:48.901181496 +0000 UTC m=+110.375325970" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.917668 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8r2tr" podStartSLOduration=47.917652128 podStartE2EDuration="47.917652128s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:24:48.917452222 +0000 UTC m=+110.391596586" watchObservedRunningTime="2026-03-08 00:24:48.917652128 +0000 UTC m=+110.391796462" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.991166 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.991248 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.991262 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.991285 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:48 crc kubenswrapper[4762]: I0308 00:24:48.991297 4762 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:48Z","lastTransitionTime":"2026-03-08T00:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.094560 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.094600 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.094612 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.094655 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.094668 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:49Z","lastTransitionTime":"2026-03-08T00:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.200560 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.200603 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.200614 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.200630 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.200641 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:49Z","lastTransitionTime":"2026-03-08T00:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.241665 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs\") pod \"network-metrics-daemon-gdnwf\" (UID: \"a6d5f4b4-a877-45da-9fed-81885011430f\") " pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:24:49 crc kubenswrapper[4762]: E0308 00:24:49.241925 4762 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 00:24:49 crc kubenswrapper[4762]: E0308 00:24:49.242043 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs podName:a6d5f4b4-a877-45da-9fed-81885011430f nodeName:}" failed. No retries permitted until 2026-03-08 00:24:51.242018383 +0000 UTC m=+112.716162767 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs") pod "network-metrics-daemon-gdnwf" (UID: "a6d5f4b4-a877-45da-9fed-81885011430f") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.262788 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.262850 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:49 crc kubenswrapper[4762]: E0308 00:24:49.264693 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:49 crc kubenswrapper[4762]: E0308 00:24:49.264594 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gdnwf" podUID="a6d5f4b4-a877-45da-9fed-81885011430f" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.303425 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.303472 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.303483 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.303499 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.303512 4762 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:49Z","lastTransitionTime":"2026-03-08T00:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.405871 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.405920 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.405932 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.405950 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.405963 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:49Z","lastTransitionTime":"2026-03-08T00:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.508276 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.508306 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.508317 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.508344 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.508357 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:49Z","lastTransitionTime":"2026-03-08T00:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.611659 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.611738 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.612116 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.612170 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.612187 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:49Z","lastTransitionTime":"2026-03-08T00:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.715628 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.715688 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.715714 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.715738 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.715755 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:49Z","lastTransitionTime":"2026-03-08T00:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.819557 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.819612 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.819629 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.819658 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.819677 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:49Z","lastTransitionTime":"2026-03-08T00:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.861910 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerStarted","Data":"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2"} Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.862652 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.862720 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.862739 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.869012 4762 generic.go:334] "Generic (PLEG): container finished" podID="7d2d9680-4c98-4b01-a49b-90766e2331b9" containerID="e0027101d29b976b48a029445d511bed9a1c5526b9b4ef79e5e879b599f5263d" exitCode=0 Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.869766 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" event={"ID":"7d2d9680-4c98-4b01-a49b-90766e2331b9","Type":"ContainerDied","Data":"e0027101d29b976b48a029445d511bed9a1c5526b9b4ef79e5e879b599f5263d"} Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.904073 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" podStartSLOduration=48.904045535 podStartE2EDuration="48.904045535s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:24:49.902493676 +0000 UTC m=+111.376638100" 
watchObservedRunningTime="2026-03-08 00:24:49.904045535 +0000 UTC m=+111.378189959" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.914540 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.914880 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.922820 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.922871 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.922892 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.922946 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:49 crc kubenswrapper[4762]: I0308 00:24:49.922965 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:49Z","lastTransitionTime":"2026-03-08T00:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.025941 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.026033 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.026055 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.026094 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.026114 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:50Z","lastTransitionTime":"2026-03-08T00:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.128916 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.128965 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.128982 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.129006 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.129023 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:50Z","lastTransitionTime":"2026-03-08T00:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.231110 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.231146 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.231155 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.231169 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.231180 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:50Z","lastTransitionTime":"2026-03-08T00:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.263102 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.263134 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:50 crc kubenswrapper[4762]: E0308 00:24:50.263287 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:50 crc kubenswrapper[4762]: E0308 00:24:50.263348 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.333903 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.333932 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.333940 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.333953 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.333961 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:50Z","lastTransitionTime":"2026-03-08T00:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.437398 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.437461 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.437485 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.437514 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.437538 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:50Z","lastTransitionTime":"2026-03-08T00:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.540329 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.540625 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.540800 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.540973 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.541098 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:50Z","lastTransitionTime":"2026-03-08T00:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.644008 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.644084 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.644105 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.644130 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.644148 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:50Z","lastTransitionTime":"2026-03-08T00:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.747584 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.747618 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.747627 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.747641 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.747650 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:50Z","lastTransitionTime":"2026-03-08T00:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.851172 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.851262 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.851302 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.851336 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.851360 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:50Z","lastTransitionTime":"2026-03-08T00:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.879510 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xglnk" event={"ID":"7d2d9680-4c98-4b01-a49b-90766e2331b9","Type":"ContainerStarted","Data":"3f20f8d7eabe1fc62ef62f3cd58af8eab8f6fb7181b36b7af1679b0700e26a22"} Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.913039 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xglnk" podStartSLOduration=50.913008147 podStartE2EDuration="50.913008147s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:24:50.911475869 +0000 UTC m=+112.385620253" watchObservedRunningTime="2026-03-08 00:24:50.913008147 +0000 UTC m=+112.387152521" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.954903 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.954962 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.954981 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.955005 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:50 crc kubenswrapper[4762]: I0308 00:24:50.955026 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:50Z","lastTransitionTime":"2026-03-08T00:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.057716 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.057739 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.057747 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.057799 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.057810 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:51Z","lastTransitionTime":"2026-03-08T00:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.159781 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.159805 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.159813 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.159826 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.159835 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:51Z","lastTransitionTime":"2026-03-08T00:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.261016 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs\") pod \"network-metrics-daemon-gdnwf\" (UID: \"a6d5f4b4-a877-45da-9fed-81885011430f\") " pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:24:51 crc kubenswrapper[4762]: E0308 00:24:51.261148 4762 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 00:24:51 crc kubenswrapper[4762]: E0308 00:24:51.261203 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs podName:a6d5f4b4-a877-45da-9fed-81885011430f nodeName:}" failed. No retries permitted until 2026-03-08 00:24:55.261186518 +0000 UTC m=+116.735330872 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs") pod "network-metrics-daemon-gdnwf" (UID: "a6d5f4b4-a877-45da-9fed-81885011430f") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.262128 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:51 crc kubenswrapper[4762]: E0308 00:24:51.262236 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.262274 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:24:51 crc kubenswrapper[4762]: E0308 00:24:51.262335 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gdnwf" podUID="a6d5f4b4-a877-45da-9fed-81885011430f" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.262420 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.262451 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.262463 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.262476 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.262487 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:51Z","lastTransitionTime":"2026-03-08T00:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.364350 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.364389 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.364404 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.364423 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.364440 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:51Z","lastTransitionTime":"2026-03-08T00:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.466676 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.466701 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.466709 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.466721 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.466729 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:51Z","lastTransitionTime":"2026-03-08T00:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.569194 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.569221 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.569229 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.569611 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.569625 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:51Z","lastTransitionTime":"2026-03-08T00:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.672654 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.672712 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.672734 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.672800 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.672830 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:51Z","lastTransitionTime":"2026-03-08T00:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.698705 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gdnwf"] Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.775902 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.775943 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.775956 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.775975 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.775986 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:51Z","lastTransitionTime":"2026-03-08T00:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.879586 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.879666 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.879693 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.879724 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.879762 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:51Z","lastTransitionTime":"2026-03-08T00:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.882571 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:24:51 crc kubenswrapper[4762]: E0308 00:24:51.882817 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gdnwf" podUID="a6d5f4b4-a877-45da-9fed-81885011430f" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.983365 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.983416 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.983447 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.983468 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:51 crc kubenswrapper[4762]: I0308 00:24:51.983482 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:51Z","lastTransitionTime":"2026-03-08T00:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.082986 4762 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.086600 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.086671 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.086689 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.086714 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.086733 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:52Z","lastTransitionTime":"2026-03-08T00:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.190482 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.190539 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.190555 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.190580 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.190601 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:52Z","lastTransitionTime":"2026-03-08T00:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.262653 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.262654 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:52 crc kubenswrapper[4762]: E0308 00:24:52.262885 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:52 crc kubenswrapper[4762]: E0308 00:24:52.263090 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.293506 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.293580 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.293604 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.293635 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.293658 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:52Z","lastTransitionTime":"2026-03-08T00:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.397634 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.397675 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.397690 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.397712 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.397728 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:52Z","lastTransitionTime":"2026-03-08T00:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.501245 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.501290 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.501303 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.501320 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.501333 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:52Z","lastTransitionTime":"2026-03-08T00:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.604887 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.604966 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.604985 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.605013 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.605031 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:52Z","lastTransitionTime":"2026-03-08T00:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.708246 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.708307 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.708329 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.708355 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.708372 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:52Z","lastTransitionTime":"2026-03-08T00:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.810647 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.810678 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.810686 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.810701 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.810710 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:52Z","lastTransitionTime":"2026-03-08T00:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.913777 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.913837 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.913854 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.913879 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:52 crc kubenswrapper[4762]: I0308 00:24:52.913899 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:52Z","lastTransitionTime":"2026-03-08T00:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.017143 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.017632 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.017648 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.017671 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.017686 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:53Z","lastTransitionTime":"2026-03-08T00:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.120799 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.120846 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.120866 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.120890 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.120906 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:53Z","lastTransitionTime":"2026-03-08T00:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.261064 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.261160 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.261184 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.261216 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.261240 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:53Z","lastTransitionTime":"2026-03-08T00:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.263614 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.263674 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:53 crc kubenswrapper[4762]: E0308 00:24:53.263872 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gdnwf" podUID="a6d5f4b4-a877-45da-9fed-81885011430f" Mar 08 00:24:53 crc kubenswrapper[4762]: E0308 00:24:53.264034 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.364423 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.364485 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.364502 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.364528 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.364545 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:53Z","lastTransitionTime":"2026-03-08T00:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.468577 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.468642 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.468661 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.468689 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.468710 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:53Z","lastTransitionTime":"2026-03-08T00:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.572145 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.572200 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.572217 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.572244 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.572261 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:53Z","lastTransitionTime":"2026-03-08T00:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.676007 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.676089 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.676113 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.676144 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.676166 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:53Z","lastTransitionTime":"2026-03-08T00:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.779167 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.779225 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.779247 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.779275 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.779297 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:53Z","lastTransitionTime":"2026-03-08T00:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.881428 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.881515 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.881538 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.881568 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.881591 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:53Z","lastTransitionTime":"2026-03-08T00:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.984345 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.984409 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.984427 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.984457 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:53 crc kubenswrapper[4762]: I0308 00:24:53.984474 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:53Z","lastTransitionTime":"2026-03-08T00:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.087596 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.087659 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.087681 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.087708 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.087726 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:54Z","lastTransitionTime":"2026-03-08T00:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.189847 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.189907 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.189933 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.189964 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.189987 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:54Z","lastTransitionTime":"2026-03-08T00:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.263129 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:54 crc kubenswrapper[4762]: E0308 00:24:54.263315 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.263367 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:54 crc kubenswrapper[4762]: E0308 00:24:54.263844 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.264199 4762 scope.go:117] "RemoveContainer" containerID="74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.293438 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.293486 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.293500 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.293521 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.293535 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:54Z","lastTransitionTime":"2026-03-08T00:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.396186 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.396240 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.396256 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.396278 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.396292 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:54Z","lastTransitionTime":"2026-03-08T00:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.499136 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.499178 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.499189 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.499204 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.499216 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:54Z","lastTransitionTime":"2026-03-08T00:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.602031 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.602077 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.602088 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.602110 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.602122 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:54Z","lastTransitionTime":"2026-03-08T00:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.704756 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.704843 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.704872 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.704899 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.704917 4762 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-08T00:24:54Z","lastTransitionTime":"2026-03-08T00:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.808513 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.808580 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.808597 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.808618 4762 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.808741 4762 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.884929 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jthcx"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.893963 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-frbvk"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.894593 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vw2gn"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.895255 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.895838 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.896394 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.903436 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.904121 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.904502 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.904963 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nq4dh"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.905689 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.904988 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.905252 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.906061 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.907817 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.908151 4762 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.908174 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.908240 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.908673 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.909099 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.909355 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.911210 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.911860 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.912323 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.912684 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-842sk"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.912749 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.913278 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.913344 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.913400 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.913607 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.913739 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.913831 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.913878 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.919984 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.920208 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.920530 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.921223 4762 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1c8d0214c1e6f782558d85144d804a68b511c473b04f10531860109eb2dd3d08"} Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.921268 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tw6wd"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.921663 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zdv55"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.921919 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.923294 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.923456 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.923674 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.925548 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.926182 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.926407 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.926452 4762 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.926888 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5pmsg"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.927039 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.930081 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zdv55" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.930340 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-84dbj"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.930722 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nf2fd"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.931036 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.931071 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.931354 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.931437 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-84dbj" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.931537 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-nf2fd" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.932386 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gm49w"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.932552 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.933083 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.949750 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.950845 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.951281 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.952191 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.952583 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.953408 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 
00:24:54.953474 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.966198 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.966492 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.966846 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.967570 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.967575 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.967672 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.967778 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.967868 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.968065 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.968199 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 08 00:24:54 crc 
kubenswrapper[4762]: I0308 00:24:54.968416 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.971915 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.972263 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.972396 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.972535 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.976360 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.976360 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.976559 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.976617 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.976683 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.976902 4762 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.977207 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.977257 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.977394 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.977453 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.977523 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.977632 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.977726 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.977792 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.978045 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.978233 4762 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.978371 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.978699 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.979069 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.981131 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.981390 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.981490 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.983319 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.984199 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.984574 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29548800-x52sd"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.985101 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29548800-x52sd" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.985986 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.986372 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.986841 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jthcx"] Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.990157 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 08 00:24:54 crc kubenswrapper[4762]: I0308 00:24:54.995889 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.006702 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/460afccf-5d2c-44d9-813e-41c06be89ab7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.006735 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vw2gn\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.006772 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/460afccf-5d2c-44d9-813e-41c06be89ab7-audit-dir\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.006797 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/35bd79d9-3f11-4ed4-85b6-39711c51f58d-machine-approver-tls\") pod \"machine-approver-56656f9798-q44mn\" (UID: \"35bd79d9-3f11-4ed4-85b6-39711c51f58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.006814 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xm4c\" (UniqueName: \"kubernetes.io/projected/cf5ac2df-231b-4019-a6ad-a9485ee8802e-kube-api-access-7xm4c\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.006832 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9xkq\" (UniqueName: \"kubernetes.io/projected/1d484943-583d-493a-ab04-bf99847ff4c4-kube-api-access-x9xkq\") pod \"console-operator-58897d9998-tw6wd\" (UID: \"1d484943-583d-493a-ab04-bf99847ff4c4\") " pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.006865 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a2076a9-3463-44f1-9c63-225f97d62769-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wltst\" (UID: 
\"7a2076a9-3463-44f1-9c63-225f97d62769\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.006882 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd304649-1e47-4aca-ac7f-31ce823babbb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zdv55\" (UID: \"bd304649-1e47-4aca-ac7f-31ce823babbb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zdv55" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.006944 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vghvr\" (UniqueName: \"kubernetes.io/projected/6e8e8070-7d3f-4a58-b1ce-6f240bb0170d-kube-api-access-vghvr\") pod \"downloads-7954f5f757-84dbj\" (UID: \"6e8e8070-7d3f-4a58-b1ce-6f240bb0170d\") " pod="openshift-console/downloads-7954f5f757-84dbj" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.006986 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d484943-583d-493a-ab04-bf99847ff4c4-trusted-ca\") pod \"console-operator-58897d9998-tw6wd\" (UID: \"1d484943-583d-493a-ab04-bf99847ff4c4\") " pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007012 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-oauth-serving-cert\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007037 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d484943-583d-493a-ab04-bf99847ff4c4-config\") pod \"console-operator-58897d9998-tw6wd\" (UID: \"1d484943-583d-493a-ab04-bf99847ff4c4\") " pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007062 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007085 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007111 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d3224d2-e83a-4707-9e42-e13d68451af3-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nq4dh\" (UID: \"9d3224d2-e83a-4707-9e42-e13d68451af3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007136 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb1d6b44-973f-4add-aadd-0dbbea83af1d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hv69z\" 
(UID: \"bb1d6b44-973f-4add-aadd-0dbbea83af1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007159 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007224 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48rsk\" (UniqueName: \"kubernetes.io/projected/041990e2-6203-42ce-b6fd-6882e21fc2a7-kube-api-access-48rsk\") pod \"cluster-image-registry-operator-dc59b4c8b-c78f7\" (UID: \"041990e2-6203-42ce-b6fd-6882e21fc2a7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007319 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/460afccf-5d2c-44d9-813e-41c06be89ab7-serving-cert\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007354 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt6tb\" (UniqueName: \"kubernetes.io/projected/bd304649-1e47-4aca-ac7f-31ce823babbb-kube-api-access-kt6tb\") pod \"cluster-samples-operator-665b6dd947-zdv55\" (UID: \"bd304649-1e47-4aca-ac7f-31ce823babbb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zdv55" Mar 08 00:24:55 crc 
kubenswrapper[4762]: I0308 00:24:55.007378 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-service-ca\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007401 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007429 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-serving-cert\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007452 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/281aeeb6-c1e4-4189-ae69-1b28741649d4-etcd-client\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007503 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/46ce5811-52d7-493a-a861-90d666c994ed-node-pullsecrets\") pod \"apiserver-76f77b778f-jthcx\" (UID: 
\"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007591 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/041990e2-6203-42ce-b6fd-6882e21fc2a7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c78f7\" (UID: \"041990e2-6203-42ce-b6fd-6882e21fc2a7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007623 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007648 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46ce5811-52d7-493a-a861-90d666c994ed-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007675 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d3224d2-e83a-4707-9e42-e13d68451af3-service-ca-bundle\") pod \"authentication-operator-69f744f599-nq4dh\" (UID: \"9d3224d2-e83a-4707-9e42-e13d68451af3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007696 4762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h7pg\" (UniqueName: \"kubernetes.io/projected/281aeeb6-c1e4-4189-ae69-1b28741649d4-kube-api-access-8h7pg\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007728 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42edfa8-610d-4cdf-a0db-63d3ccad4615-config\") pod \"machine-api-operator-5694c8668f-frbvk\" (UID: \"f42edfa8-610d-4cdf-a0db-63d3ccad4615\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007754 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-trusted-ca-bundle\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007818 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf5ac2df-231b-4019-a6ad-a9485ee8802e-audit-dir\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007843 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5pmsg\" 
(UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007884 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/46ce5811-52d7-493a-a861-90d666c994ed-image-import-ca\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007910 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/460afccf-5d2c-44d9-813e-41c06be89ab7-encryption-config\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007931 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f42edfa8-610d-4cdf-a0db-63d3ccad4615-images\") pod \"machine-api-operator-5694c8668f-frbvk\" (UID: \"f42edfa8-610d-4cdf-a0db-63d3ccad4615\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007952 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007975 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/46ce5811-52d7-493a-a861-90d666c994ed-serving-cert\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.007999 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb1d6b44-973f-4add-aadd-0dbbea83af1d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hv69z\" (UID: \"bb1d6b44-973f-4add-aadd-0dbbea83af1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008023 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/18f4f24d-5e64-4cbb-b6f2-59b836b022e0-metrics-tls\") pod \"dns-operator-744455d44c-nf2fd\" (UID: \"18f4f24d-5e64-4cbb-b6f2-59b836b022e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-nf2fd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008048 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-config\") pod \"controller-manager-879f6c89f-vw2gn\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008067 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-client-ca\") pod \"controller-manager-879f6c89f-vw2gn\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 
00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008098 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008131 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phkph\" (UniqueName: \"kubernetes.io/projected/18f4f24d-5e64-4cbb-b6f2-59b836b022e0-kube-api-access-phkph\") pod \"dns-operator-744455d44c-nf2fd\" (UID: \"18f4f24d-5e64-4cbb-b6f2-59b836b022e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-nf2fd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008155 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3224d2-e83a-4707-9e42-e13d68451af3-config\") pod \"authentication-operator-69f744f599-nq4dh\" (UID: \"9d3224d2-e83a-4707-9e42-e13d68451af3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008181 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/281aeeb6-c1e4-4189-ae69-1b28741649d4-serving-cert\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008202 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7a2076a9-3463-44f1-9c63-225f97d62769-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wltst\" (UID: \"7a2076a9-3463-44f1-9c63-225f97d62769\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008224 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/46ce5811-52d7-493a-a861-90d666c994ed-etcd-serving-ca\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008245 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/46ce5811-52d7-493a-a861-90d666c994ed-encryption-config\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008267 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/460afccf-5d2c-44d9-813e-41c06be89ab7-etcd-client\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008288 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: 
I0308 00:24:55.008309 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46ce5811-52d7-493a-a861-90d666c994ed-audit-dir\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008329 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b54sj\" (UniqueName: \"kubernetes.io/projected/46ce5811-52d7-493a-a861-90d666c994ed-kube-api-access-b54sj\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008363 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/281aeeb6-c1e4-4189-ae69-1b28741649d4-config\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008386 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d1f4801-1613-4369-8d06-d0345df9703a-serving-cert\") pod \"controller-manager-879f6c89f-vw2gn\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008419 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/460afccf-5d2c-44d9-813e-41c06be89ab7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008445 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwtf2\" (UniqueName: \"kubernetes.io/projected/f42edfa8-610d-4cdf-a0db-63d3ccad4615-kube-api-access-jwtf2\") pod \"machine-api-operator-5694c8668f-frbvk\" (UID: \"f42edfa8-610d-4cdf-a0db-63d3ccad4615\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008466 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7a2076a9-3463-44f1-9c63-225f97d62769-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wltst\" (UID: \"7a2076a9-3463-44f1-9c63-225f97d62769\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008490 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-957wg\" (UniqueName: \"kubernetes.io/projected/35bd79d9-3f11-4ed4-85b6-39711c51f58d-kube-api-access-957wg\") pod \"machine-approver-56656f9798-q44mn\" (UID: \"35bd79d9-3f11-4ed4-85b6-39711c51f58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008510 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/041990e2-6203-42ce-b6fd-6882e21fc2a7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c78f7\" (UID: \"041990e2-6203-42ce-b6fd-6882e21fc2a7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008530 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-oauth-config\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008554 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-config\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008577 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/460afccf-5d2c-44d9-813e-41c06be89ab7-audit-policies\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008599 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f42edfa8-610d-4cdf-a0db-63d3ccad4615-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-frbvk\" (UID: \"f42edfa8-610d-4cdf-a0db-63d3ccad4615\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008621 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008657 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/35bd79d9-3f11-4ed4-85b6-39711c51f58d-auth-proxy-config\") pod \"machine-approver-56656f9798-q44mn\" (UID: \"35bd79d9-3f11-4ed4-85b6-39711c51f58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008679 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/041990e2-6203-42ce-b6fd-6882e21fc2a7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c78f7\" (UID: \"041990e2-6203-42ce-b6fd-6882e21fc2a7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008698 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d3224d2-e83a-4707-9e42-e13d68451af3-serving-cert\") pod \"authentication-operator-69f744f599-nq4dh\" (UID: \"9d3224d2-e83a-4707-9e42-e13d68451af3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008730 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/46ce5811-52d7-493a-a861-90d666c994ed-etcd-client\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008750 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/281aeeb6-c1e4-4189-ae69-1b28741649d4-etcd-service-ca\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008799 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtxjs\" (UniqueName: \"kubernetes.io/projected/ca82b7d9-bbba-4543-945b-e78923c1d3cf-kube-api-access-rtxjs\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008822 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7a2076a9-3463-44f1-9c63-225f97d62769-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wltst\" (UID: \"7a2076a9-3463-44f1-9c63-225f97d62769\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008846 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glqkt\" (UniqueName: \"kubernetes.io/projected/460afccf-5d2c-44d9-813e-41c06be89ab7-kube-api-access-glqkt\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008865 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l94sd\" (UniqueName: \"kubernetes.io/projected/9d3224d2-e83a-4707-9e42-e13d68451af3-kube-api-access-l94sd\") pod \"authentication-operator-69f744f599-nq4dh\" (UID: \"9d3224d2-e83a-4707-9e42-e13d68451af3\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008934 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/46ce5811-52d7-493a-a861-90d666c994ed-audit\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008965 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.008987 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-audit-policies\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.009034 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a2076a9-3463-44f1-9c63-225f97d62769-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wltst\" (UID: \"7a2076a9-3463-44f1-9c63-225f97d62769\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.009056 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4qj2\" 
(UniqueName: \"kubernetes.io/projected/bb1d6b44-973f-4add-aadd-0dbbea83af1d-kube-api-access-s4qj2\") pod \"openshift-controller-manager-operator-756b6f6bc6-hv69z\" (UID: \"bb1d6b44-973f-4add-aadd-0dbbea83af1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.009078 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ce5811-52d7-493a-a861-90d666c994ed-config\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.009099 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnm4s\" (UniqueName: \"kubernetes.io/projected/1d1f4801-1613-4369-8d06-d0345df9703a-kube-api-access-bnm4s\") pod \"controller-manager-879f6c89f-vw2gn\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.009120 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/281aeeb6-c1e4-4189-ae69-1b28741649d4-etcd-ca\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.009141 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35bd79d9-3f11-4ed4-85b6-39711c51f58d-config\") pod \"machine-approver-56656f9798-q44mn\" (UID: \"35bd79d9-3f11-4ed4-85b6-39711c51f58d\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.009162 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d484943-583d-493a-ab04-bf99847ff4c4-serving-cert\") pod \"console-operator-58897d9998-tw6wd\" (UID: \"1d484943-583d-493a-ab04-bf99847ff4c4\") " pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.009592 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.010337 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.014517 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.014802 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.015370 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.016388 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.020203 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.020970 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.021244 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mg6jl"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.021887 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.024896 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.025419 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-kn22k"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.025706 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.026137 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.026406 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.026409 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.037333 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vplbt"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.045852 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.054315 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vplbt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.055334 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wd6hs"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.057925 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.064704 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.064805 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.074917 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.075094 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.075235 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.076441 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.076566 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.076610 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.076736 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.076797 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.078217 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.078318 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.079046 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.079540 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.079903 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.080022 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wd6hs" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.081349 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.081523 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.082254 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.082441 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.082886 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.083058 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.083959 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.084253 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.084459 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.084678 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.084786 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.084966 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.085172 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.085424 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.085580 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.085882 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.091787 4762 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.092448 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.094199 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.095846 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.096434 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.096636 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.098468 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.102929 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.103463 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.106564 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.107407 4762 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-n786p"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.107740 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.107957 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.108008 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.108139 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.108187 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.109242 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9g2xn"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.109821 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9g2xn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.109987 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-t95jr"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.110496 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112196 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9373f32-b3da-4942-83dd-490eb4d631fa-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7nw7k\" (UID: \"f9373f32-b3da-4942-83dd-490eb4d631fa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112224 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/47ea3169-322b-4246-9a87-515ba6b49133-stats-auth\") pod \"router-default-5444994796-kn22k\" (UID: \"47ea3169-322b-4246-9a87-515ba6b49133\") " pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112269 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-957wg\" (UniqueName: \"kubernetes.io/projected/35bd79d9-3f11-4ed4-85b6-39711c51f58d-kube-api-access-957wg\") pod \"machine-approver-56656f9798-q44mn\" (UID: \"35bd79d9-3f11-4ed4-85b6-39711c51f58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112286 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwtf2\" (UniqueName: \"kubernetes.io/projected/f42edfa8-610d-4cdf-a0db-63d3ccad4615-kube-api-access-jwtf2\") pod \"machine-api-operator-5694c8668f-frbvk\" (UID: \"f42edfa8-610d-4cdf-a0db-63d3ccad4615\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112301 4762 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7a2076a9-3463-44f1-9c63-225f97d62769-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wltst\" (UID: \"7a2076a9-3463-44f1-9c63-225f97d62769\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112316 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-config\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112332 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/460afccf-5d2c-44d9-813e-41c06be89ab7-audit-policies\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112348 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/041990e2-6203-42ce-b6fd-6882e21fc2a7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c78f7\" (UID: \"041990e2-6203-42ce-b6fd-6882e21fc2a7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112363 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-oauth-config\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 
00:24:55.112379 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112402 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f42edfa8-610d-4cdf-a0db-63d3ccad4615-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-frbvk\" (UID: \"f42edfa8-610d-4cdf-a0db-63d3ccad4615\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112416 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/35bd79d9-3f11-4ed4-85b6-39711c51f58d-auth-proxy-config\") pod \"machine-approver-56656f9798-q44mn\" (UID: \"35bd79d9-3f11-4ed4-85b6-39711c51f58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112431 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/041990e2-6203-42ce-b6fd-6882e21fc2a7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c78f7\" (UID: \"041990e2-6203-42ce-b6fd-6882e21fc2a7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112449 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d3224d2-e83a-4707-9e42-e13d68451af3-serving-cert\") pod \"authentication-operator-69f744f599-nq4dh\" (UID: 
\"9d3224d2-e83a-4707-9e42-e13d68451af3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112499 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/46ce5811-52d7-493a-a861-90d666c994ed-etcd-client\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112516 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/281aeeb6-c1e4-4189-ae69-1b28741649d4-etcd-service-ca\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112532 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9373f32-b3da-4942-83dd-490eb4d631fa-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7nw7k\" (UID: \"f9373f32-b3da-4942-83dd-490eb4d631fa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112563 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtxjs\" (UniqueName: \"kubernetes.io/projected/ca82b7d9-bbba-4543-945b-e78923c1d3cf-kube-api-access-rtxjs\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112578 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/7a2076a9-3463-44f1-9c63-225f97d62769-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wltst\" (UID: \"7a2076a9-3463-44f1-9c63-225f97d62769\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112594 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/47ea3169-322b-4246-9a87-515ba6b49133-default-certificate\") pod \"router-default-5444994796-kn22k\" (UID: \"47ea3169-322b-4246-9a87-515ba6b49133\") " pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112609 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/46ce5811-52d7-493a-a861-90d666c994ed-audit\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112626 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/62e4d886-779c-4931-87f7-370090b02132-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mg6jl\" (UID: \"62e4d886-779c-4931-87f7-370090b02132\") " pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112643 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4e0bf995-5e45-4c39-8fbd-068691ed47bb-proxy-tls\") pod \"machine-config-controller-84d6567774-vnmhm\" (UID: \"4e0bf995-5e45-4c39-8fbd-068691ed47bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm" Mar 08 00:24:55 crc kubenswrapper[4762]: 
I0308 00:24:55.112661 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glqkt\" (UniqueName: \"kubernetes.io/projected/460afccf-5d2c-44d9-813e-41c06be89ab7-kube-api-access-glqkt\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112677 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l94sd\" (UniqueName: \"kubernetes.io/projected/9d3224d2-e83a-4707-9e42-e13d68451af3-kube-api-access-l94sd\") pod \"authentication-operator-69f744f599-nq4dh\" (UID: \"9d3224d2-e83a-4707-9e42-e13d68451af3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112693 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4e0bf995-5e45-4c39-8fbd-068691ed47bb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vnmhm\" (UID: \"4e0bf995-5e45-4c39-8fbd-068691ed47bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112711 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xt7c\" (UniqueName: \"kubernetes.io/projected/f9373f32-b3da-4942-83dd-490eb4d631fa-kube-api-access-5xt7c\") pod \"kube-storage-version-migrator-operator-b67b599dd-7nw7k\" (UID: \"f9373f32-b3da-4942-83dd-490eb4d631fa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112728 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a49b045c-2159-4fd1-b0de-fcf1453e6adb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gppmf\" (UID: \"a49b045c-2159-4fd1-b0de-fcf1453e6adb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112745 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112761 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnb44\" (UniqueName: \"kubernetes.io/projected/6d729469-fc86-4275-9f27-df601b6a1700-kube-api-access-hnb44\") pod \"openshift-apiserver-operator-796bbdcf4f-kdmr2\" (UID: \"6d729469-fc86-4275-9f27-df601b6a1700\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112792 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da8e6450-ebd3-47b0-9153-1deebe16432f-trusted-ca\") pod \"ingress-operator-5b745b69d9-2sqjk\" (UID: \"da8e6450-ebd3-47b0-9153-1deebe16432f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112811 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47ea3169-322b-4246-9a87-515ba6b49133-metrics-certs\") pod \"router-default-5444994796-kn22k\" (UID: \"47ea3169-322b-4246-9a87-515ba6b49133\") " 
pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112833 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7a2076a9-3463-44f1-9c63-225f97d62769-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wltst\" (UID: \"7a2076a9-3463-44f1-9c63-225f97d62769\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112869 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-audit-policies\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112890 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f5cbd39f-952e-4664-8994-4b2dd4162b25-proxy-tls\") pod \"machine-config-operator-74547568cd-mgmkn\" (UID: \"f5cbd39f-952e-4664-8994-4b2dd4162b25\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112912 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnm4s\" (UniqueName: \"kubernetes.io/projected/1d1f4801-1613-4369-8d06-d0345df9703a-kube-api-access-bnm4s\") pod \"controller-manager-879f6c89f-vw2gn\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112928 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/281aeeb6-c1e4-4189-ae69-1b28741649d4-etcd-ca\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112943 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a49b045c-2159-4fd1-b0de-fcf1453e6adb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gppmf\" (UID: \"a49b045c-2159-4fd1-b0de-fcf1453e6adb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112961 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35bd79d9-3f11-4ed4-85b6-39711c51f58d-config\") pod \"machine-approver-56656f9798-q44mn\" (UID: \"35bd79d9-3f11-4ed4-85b6-39711c51f58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112979 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d484943-583d-493a-ab04-bf99847ff4c4-serving-cert\") pod \"console-operator-58897d9998-tw6wd\" (UID: \"1d484943-583d-493a-ab04-bf99847ff4c4\") " pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.112994 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4qj2\" (UniqueName: \"kubernetes.io/projected/bb1d6b44-973f-4add-aadd-0dbbea83af1d-kube-api-access-s4qj2\") pod \"openshift-controller-manager-operator-756b6f6bc6-hv69z\" (UID: \"bb1d6b44-973f-4add-aadd-0dbbea83af1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z" 
Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113009 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ce5811-52d7-493a-a861-90d666c994ed-config\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113032 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/460afccf-5d2c-44d9-813e-41c06be89ab7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113053 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fe7222be-b489-4bcb-bc44-0a8933cde1c5-serviceca\") pod \"image-pruner-29548800-x52sd\" (UID: \"fe7222be-b489-4bcb-bc44-0a8933cde1c5\") " pod="openshift-image-registry/image-pruner-29548800-x52sd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113071 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/460afccf-5d2c-44d9-813e-41c06be89ab7-audit-dir\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113086 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/35bd79d9-3f11-4ed4-85b6-39711c51f58d-machine-approver-tls\") pod \"machine-approver-56656f9798-q44mn\" (UID: \"35bd79d9-3f11-4ed4-85b6-39711c51f58d\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113101 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xm4c\" (UniqueName: \"kubernetes.io/projected/cf5ac2df-231b-4019-a6ad-a9485ee8802e-kube-api-access-7xm4c\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113116 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vw2gn\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113135 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwgf2\" (UniqueName: \"kubernetes.io/projected/4e0bf995-5e45-4c39-8fbd-068691ed47bb-kube-api-access-bwgf2\") pod \"machine-config-controller-84d6567774-vnmhm\" (UID: \"4e0bf995-5e45-4c39-8fbd-068691ed47bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113153 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd304649-1e47-4aca-ac7f-31ce823babbb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zdv55\" (UID: \"bd304649-1e47-4aca-ac7f-31ce823babbb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zdv55" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113169 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-fmcwn\" (UniqueName: \"kubernetes.io/projected/47ea3169-322b-4246-9a87-515ba6b49133-kube-api-access-fmcwn\") pod \"router-default-5444994796-kn22k\" (UID: \"47ea3169-322b-4246-9a87-515ba6b49133\") " pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113186 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vghvr\" (UniqueName: \"kubernetes.io/projected/6e8e8070-7d3f-4a58-b1ce-6f240bb0170d-kube-api-access-vghvr\") pod \"downloads-7954f5f757-84dbj\" (UID: \"6e8e8070-7d3f-4a58-b1ce-6f240bb0170d\") " pod="openshift-console/downloads-7954f5f757-84dbj" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113200 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d484943-583d-493a-ab04-bf99847ff4c4-trusted-ca\") pod \"console-operator-58897d9998-tw6wd\" (UID: \"1d484943-583d-493a-ab04-bf99847ff4c4\") " pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113216 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9xkq\" (UniqueName: \"kubernetes.io/projected/1d484943-583d-493a-ab04-bf99847ff4c4-kube-api-access-x9xkq\") pod \"console-operator-58897d9998-tw6wd\" (UID: \"1d484943-583d-493a-ab04-bf99847ff4c4\") " pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113230 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a2076a9-3463-44f1-9c63-225f97d62769-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wltst\" (UID: \"7a2076a9-3463-44f1-9c63-225f97d62769\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:55 crc 
kubenswrapper[4762]: I0308 00:24:55.113245 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d484943-583d-493a-ab04-bf99847ff4c4-config\") pod \"console-operator-58897d9998-tw6wd\" (UID: \"1d484943-583d-493a-ab04-bf99847ff4c4\") " pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113259 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-oauth-serving-cert\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113274 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d3224d2-e83a-4707-9e42-e13d68451af3-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nq4dh\" (UID: \"9d3224d2-e83a-4707-9e42-e13d68451af3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113288 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb1d6b44-973f-4add-aadd-0dbbea83af1d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hv69z\" (UID: \"bb1d6b44-973f-4add-aadd-0dbbea83af1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113305 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5pmsg\" 
(UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113322 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113337 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113352 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48rsk\" (UniqueName: \"kubernetes.io/projected/041990e2-6203-42ce-b6fd-6882e21fc2a7-kube-api-access-48rsk\") pod \"cluster-image-registry-operator-dc59b4c8b-c78f7\" (UID: \"041990e2-6203-42ce-b6fd-6882e21fc2a7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113368 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/460afccf-5d2c-44d9-813e-41c06be89ab7-serving-cert\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113384 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-kt6tb\" (UniqueName: \"kubernetes.io/projected/bd304649-1e47-4aca-ac7f-31ce823babbb-kube-api-access-kt6tb\") pod \"cluster-samples-operator-665b6dd947-zdv55\" (UID: \"bd304649-1e47-4aca-ac7f-31ce823babbb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zdv55" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113402 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d729469-fc86-4275-9f27-df601b6a1700-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kdmr2\" (UID: \"6d729469-fc86-4275-9f27-df601b6a1700\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113418 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsxsb\" (UniqueName: \"kubernetes.io/projected/fe7222be-b489-4bcb-bc44-0a8933cde1c5-kube-api-access-lsxsb\") pod \"image-pruner-29548800-x52sd\" (UID: \"fe7222be-b489-4bcb-bc44-0a8933cde1c5\") " pod="openshift-image-registry/image-pruner-29548800-x52sd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113434 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-service-ca\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113448 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113463 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-serving-cert\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113478 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/281aeeb6-c1e4-4189-ae69-1b28741649d4-etcd-client\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113494 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d729469-fc86-4275-9f27-df601b6a1700-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kdmr2\" (UID: \"6d729469-fc86-4275-9f27-df601b6a1700\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113510 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htklq\" (UniqueName: \"kubernetes.io/projected/62e4d886-779c-4931-87f7-370090b02132-kube-api-access-htklq\") pod \"marketplace-operator-79b997595-mg6jl\" (UID: \"62e4d886-779c-4931-87f7-370090b02132\") " pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113526 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/041990e2-6203-42ce-b6fd-6882e21fc2a7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c78f7\" (UID: \"041990e2-6203-42ce-b6fd-6882e21fc2a7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113540 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/46ce5811-52d7-493a-a861-90d666c994ed-node-pullsecrets\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113556 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f5cbd39f-952e-4664-8994-4b2dd4162b25-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mgmkn\" (UID: \"f5cbd39f-952e-4664-8994-4b2dd4162b25\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113573 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d3224d2-e83a-4707-9e42-e13d68451af3-service-ca-bundle\") pod \"authentication-operator-69f744f599-nq4dh\" (UID: \"9d3224d2-e83a-4707-9e42-e13d68451af3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113589 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113605 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46ce5811-52d7-493a-a861-90d666c994ed-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113621 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42edfa8-610d-4cdf-a0db-63d3ccad4615-config\") pod \"machine-api-operator-5694c8668f-frbvk\" (UID: \"f42edfa8-610d-4cdf-a0db-63d3ccad4615\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113637 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h7pg\" (UniqueName: \"kubernetes.io/projected/281aeeb6-c1e4-4189-ae69-1b28741649d4-kube-api-access-8h7pg\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113653 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a49b045c-2159-4fd1-b0de-fcf1453e6adb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gppmf\" (UID: \"a49b045c-2159-4fd1-b0de-fcf1453e6adb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113669 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-trusted-ca-bundle\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113692 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/46ce5811-52d7-493a-a861-90d666c994ed-image-import-ca\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113708 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/460afccf-5d2c-44d9-813e-41c06be89ab7-encryption-config\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113724 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f42edfa8-610d-4cdf-a0db-63d3ccad4615-images\") pod \"machine-api-operator-5694c8668f-frbvk\" (UID: \"f42edfa8-610d-4cdf-a0db-63d3ccad4615\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113740 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf5ac2df-231b-4019-a6ad-a9485ee8802e-audit-dir\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113760 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113790 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47ea3169-322b-4246-9a87-515ba6b49133-service-ca-bundle\") pod \"router-default-5444994796-kn22k\" (UID: \"47ea3169-322b-4246-9a87-515ba6b49133\") " pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113806 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb1d6b44-973f-4add-aadd-0dbbea83af1d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hv69z\" (UID: \"bb1d6b44-973f-4add-aadd-0dbbea83af1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113822 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113836 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46ce5811-52d7-493a-a861-90d666c994ed-serving-cert\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " 
pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113852 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-client-ca\") pod \"controller-manager-879f6c89f-vw2gn\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113868 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/18f4f24d-5e64-4cbb-b6f2-59b836b022e0-metrics-tls\") pod \"dns-operator-744455d44c-nf2fd\" (UID: \"18f4f24d-5e64-4cbb-b6f2-59b836b022e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-nf2fd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113891 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-config\") pod \"controller-manager-879f6c89f-vw2gn\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113906 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113924 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxmdt\" (UniqueName: 
\"kubernetes.io/projected/f5cbd39f-952e-4664-8994-4b2dd4162b25-kube-api-access-zxmdt\") pod \"machine-config-operator-74547568cd-mgmkn\" (UID: \"f5cbd39f-952e-4664-8994-4b2dd4162b25\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113946 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phkph\" (UniqueName: \"kubernetes.io/projected/18f4f24d-5e64-4cbb-b6f2-59b836b022e0-kube-api-access-phkph\") pod \"dns-operator-744455d44c-nf2fd\" (UID: \"18f4f24d-5e64-4cbb-b6f2-59b836b022e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-nf2fd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113970 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3224d2-e83a-4707-9e42-e13d68451af3-config\") pod \"authentication-operator-69f744f599-nq4dh\" (UID: \"9d3224d2-e83a-4707-9e42-e13d68451af3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.113992 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/62e4d886-779c-4931-87f7-370090b02132-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mg6jl\" (UID: \"62e4d886-779c-4931-87f7-370090b02132\") " pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114013 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da8e6450-ebd3-47b0-9153-1deebe16432f-metrics-tls\") pod \"ingress-operator-5b745b69d9-2sqjk\" (UID: \"da8e6450-ebd3-47b0-9153-1deebe16432f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" Mar 
08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114032 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/da8e6450-ebd3-47b0-9153-1deebe16432f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2sqjk\" (UID: \"da8e6450-ebd3-47b0-9153-1deebe16432f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114051 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a2076a9-3463-44f1-9c63-225f97d62769-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wltst\" (UID: \"7a2076a9-3463-44f1-9c63-225f97d62769\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114066 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/46ce5811-52d7-493a-a861-90d666c994ed-etcd-serving-ca\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114083 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/46ce5811-52d7-493a-a861-90d666c994ed-encryption-config\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114098 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/281aeeb6-c1e4-4189-ae69-1b28741649d4-serving-cert\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114114 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f5cbd39f-952e-4664-8994-4b2dd4162b25-images\") pod \"machine-config-operator-74547568cd-mgmkn\" (UID: \"f5cbd39f-952e-4664-8994-4b2dd4162b25\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114130 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114146 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46ce5811-52d7-493a-a861-90d666c994ed-audit-dir\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114164 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b54sj\" (UniqueName: \"kubernetes.io/projected/46ce5811-52d7-493a-a861-90d666c994ed-kube-api-access-b54sj\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114187 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/460afccf-5d2c-44d9-813e-41c06be89ab7-etcd-client\") pod 
\"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114202 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/281aeeb6-c1e4-4189-ae69-1b28741649d4-config\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114218 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwrdc\" (UniqueName: \"kubernetes.io/projected/da8e6450-ebd3-47b0-9153-1deebe16432f-kube-api-access-hwrdc\") pod \"ingress-operator-5b745b69d9-2sqjk\" (UID: \"da8e6450-ebd3-47b0-9153-1deebe16432f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114250 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/460afccf-5d2c-44d9-813e-41c06be89ab7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114267 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d1f4801-1613-4369-8d06-d0345df9703a-serving-cert\") pod \"controller-manager-879f6c89f-vw2gn\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114811 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/281aeeb6-c1e4-4189-ae69-1b28741649d4-etcd-service-ca\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.114932 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7a2076a9-3463-44f1-9c63-225f97d62769-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-wltst\" (UID: \"7a2076a9-3463-44f1-9c63-225f97d62769\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.115407 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/46ce5811-52d7-493a-a861-90d666c994ed-audit\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.115976 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fx8np"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.116083 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.116608 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-49qkg"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.116728 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/7a2076a9-3463-44f1-9c63-225f97d62769-service-ca\") pod \"cluster-version-operator-5c965bbfc6-wltst\" (UID: \"7a2076a9-3463-44f1-9c63-225f97d62769\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.117096 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49qkg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.117243 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-audit-policies\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.117580 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/460afccf-5d2c-44d9-813e-41c06be89ab7-audit-policies\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.117864 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7a2076a9-3463-44f1-9c63-225f97d62769-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-wltst\" (UID: \"7a2076a9-3463-44f1-9c63-225f97d62769\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.118517 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-config\") pod \"console-f9d7485db-842sk\" (UID: 
\"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.119442 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/041990e2-6203-42ce-b6fd-6882e21fc2a7-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-c78f7\" (UID: \"041990e2-6203-42ce-b6fd-6882e21fc2a7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.119446 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d484943-583d-493a-ab04-bf99847ff4c4-trusted-ca\") pod \"console-operator-58897d9998-tw6wd\" (UID: \"1d484943-583d-493a-ab04-bf99847ff4c4\") " pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.120040 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/35bd79d9-3f11-4ed4-85b6-39711c51f58d-auth-proxy-config\") pod \"machine-approver-56656f9798-q44mn\" (UID: \"35bd79d9-3f11-4ed4-85b6-39711c51f58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.120120 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx8np" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.120747 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/46ce5811-52d7-493a-a861-90d666c994ed-image-import-ca\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.121480 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/460afccf-5d2c-44d9-813e-41c06be89ab7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.121533 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46ce5811-52d7-493a-a861-90d666c994ed-config\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.121553 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d484943-583d-493a-ab04-bf99847ff4c4-config\") pod \"console-operator-58897d9998-tw6wd\" (UID: \"1d484943-583d-493a-ab04-bf99847ff4c4\") " pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.121608 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf5ac2df-231b-4019-a6ad-a9485ee8802e-audit-dir\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.121860 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f42edfa8-610d-4cdf-a0db-63d3ccad4615-images\") pod \"machine-api-operator-5694c8668f-frbvk\" (UID: \"f42edfa8-610d-4cdf-a0db-63d3ccad4615\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.122102 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-oauth-serving-cert\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.127547 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d3224d2-e83a-4707-9e42-e13d68451af3-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nq4dh\" (UID: \"9d3224d2-e83a-4707-9e42-e13d68451af3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.129319 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35bd79d9-3f11-4ed4-85b6-39711c51f58d-config\") pod \"machine-approver-56656f9798-q44mn\" (UID: \"35bd79d9-3f11-4ed4-85b6-39711c51f58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.130467 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/281aeeb6-c1e4-4189-ae69-1b28741649d4-etcd-ca\") pod \"etcd-operator-b45778765-gm49w\" (UID: 
\"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.131704 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/460afccf-5d2c-44d9-813e-41c06be89ab7-audit-dir\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.135766 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.137359 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb1d6b44-973f-4add-aadd-0dbbea83af1d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hv69z\" (UID: \"bb1d6b44-973f-4add-aadd-0dbbea83af1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.139025 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d1f4801-1613-4369-8d06-d0345df9703a-serving-cert\") pod \"controller-manager-879f6c89f-vw2gn\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.139185 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-vw2gn\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.139415 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.139506 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/35bd79d9-3f11-4ed4-85b6-39711c51f58d-machine-approver-tls\") pod \"machine-approver-56656f9798-q44mn\" (UID: \"35bd79d9-3f11-4ed4-85b6-39711c51f58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.139494 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/46ce5811-52d7-493a-a861-90d666c994ed-audit-dir\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.141238 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/460afccf-5d2c-44d9-813e-41c06be89ab7-encryption-config\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.141634 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-vw2gn"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.141679 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nq4dh"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.141694 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-842sk"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.141875 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d3224d2-e83a-4707-9e42-e13d68451af3-serving-cert\") pod \"authentication-operator-69f744f599-nq4dh\" (UID: \"9d3224d2-e83a-4707-9e42-e13d68451af3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.141976 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/46ce5811-52d7-493a-a861-90d666c994ed-etcd-client\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.142081 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/460afccf-5d2c-44d9-813e-41c06be89ab7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.142120 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d484943-583d-493a-ab04-bf99847ff4c4-serving-cert\") pod \"console-operator-58897d9998-tw6wd\" (UID: \"1d484943-583d-493a-ab04-bf99847ff4c4\") " 
pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.142135 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bb1d6b44-973f-4add-aadd-0dbbea83af1d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hv69z\" (UID: \"bb1d6b44-973f-4add-aadd-0dbbea83af1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.142542 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/281aeeb6-c1e4-4189-ae69-1b28741649d4-serving-cert\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.142591 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/46ce5811-52d7-493a-a861-90d666c994ed-node-pullsecrets\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.142725 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/281aeeb6-c1e4-4189-ae69-1b28741649d4-config\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.143163 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/46ce5811-52d7-493a-a861-90d666c994ed-encryption-config\") pod \"apiserver-76f77b778f-jthcx\" (UID: 
\"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.143179 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d3224d2-e83a-4707-9e42-e13d68451af3-config\") pod \"authentication-operator-69f744f599-nq4dh\" (UID: \"9d3224d2-e83a-4707-9e42-e13d68451af3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.143180 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-client-ca\") pod \"controller-manager-879f6c89f-vw2gn\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.143881 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d3224d2-e83a-4707-9e42-e13d68451af3-service-ca-bundle\") pod \"authentication-operator-69f744f599-nq4dh\" (UID: \"9d3224d2-e83a-4707-9e42-e13d68451af3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.144220 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f42edfa8-610d-4cdf-a0db-63d3ccad4615-config\") pod \"machine-api-operator-5694c8668f-frbvk\" (UID: \"f42edfa8-610d-4cdf-a0db-63d3ccad4615\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.144520 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-config\") pod 
\"controller-manager-879f6c89f-vw2gn\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.145080 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/bd304649-1e47-4aca-ac7f-31ce823babbb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-zdv55\" (UID: \"bd304649-1e47-4aca-ac7f-31ce823babbb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zdv55" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.145539 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/46ce5811-52d7-493a-a861-90d666c994ed-serving-cert\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.145642 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-oauth-config\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.146010 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f42edfa8-610d-4cdf-a0db-63d3ccad4615-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-frbvk\" (UID: \"f42edfa8-610d-4cdf-a0db-63d3ccad4615\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.146182 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7a2076a9-3463-44f1-9c63-225f97d62769-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-wltst\" (UID: \"7a2076a9-3463-44f1-9c63-225f97d62769\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.148046 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46ce5811-52d7-493a-a861-90d666c994ed-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.153593 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.154022 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.163572 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.165043 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-error\") pod 
\"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.165519 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-trusted-ca-bundle\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.165825 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.168750 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8lcsp"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.169210 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.169305 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zdv55"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.169376 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8lcsp" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.170895 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-serving-cert\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.169550 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/46ce5811-52d7-493a-a861-90d666c994ed-etcd-serving-ca\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.169532 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-service-ca\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.169808 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/281aeeb6-c1e4-4189-ae69-1b28741649d4-etcd-client\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.169859 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/18f4f24d-5e64-4cbb-b6f2-59b836b022e0-metrics-tls\") pod \"dns-operator-744455d44c-nf2fd\" (UID: \"18f4f24d-5e64-4cbb-b6f2-59b836b022e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-nf2fd" 
Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.169978 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tw6wd"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.171496 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/460afccf-5d2c-44d9-813e-41c06be89ab7-serving-cert\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.171799 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-frbvk"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.171870 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.172206 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vplbt"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.172519 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/460afccf-5d2c-44d9-813e-41c06be89ab7-etcd-client\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.144568 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc 
kubenswrapper[4762]: I0308 00:24:55.173369 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.173607 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.173666 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.175636 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.175869 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/041990e2-6203-42ce-b6fd-6882e21fc2a7-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-c78f7\" (UID: \"041990e2-6203-42ce-b6fd-6882e21fc2a7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.177582 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.178146 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.181555 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29548800-x52sd"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.185631 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nf2fd"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.188856 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.188922 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-84dbj"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.189742 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.190612 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mg6jl"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.191563 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.192446 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.193397 4762 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.194315 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.195233 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.196155 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.197087 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.197375 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.197924 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wd6hs"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.198882 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5pmsg"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.200152 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k687p"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.202162 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.202357 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-77vgj"] Mar 08 
00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.203077 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.202325 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.203375 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-77vgj" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.203433 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-t95jr"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.204495 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.205221 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fx8np"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.206706 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-49qkg"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.207094 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.208213 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.208866 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.210072 4762 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8lcsp"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.211276 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gm49w"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.212736 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.213182 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k687p"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215453 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n786p"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215502 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-77vgj"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215547 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmcwn\" (UniqueName: \"kubernetes.io/projected/47ea3169-322b-4246-9a87-515ba6b49133-kube-api-access-fmcwn\") pod \"router-default-5444994796-kn22k\" (UID: \"47ea3169-322b-4246-9a87-515ba6b49133\") " pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215621 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d729469-fc86-4275-9f27-df601b6a1700-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kdmr2\" (UID: \"6d729469-fc86-4275-9f27-df601b6a1700\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215648 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsxsb\" 
(UniqueName: \"kubernetes.io/projected/fe7222be-b489-4bcb-bc44-0a8933cde1c5-kube-api-access-lsxsb\") pod \"image-pruner-29548800-x52sd\" (UID: \"fe7222be-b489-4bcb-bc44-0a8933cde1c5\") " pod="openshift-image-registry/image-pruner-29548800-x52sd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215669 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d729469-fc86-4275-9f27-df601b6a1700-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kdmr2\" (UID: \"6d729469-fc86-4275-9f27-df601b6a1700\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215690 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv4ch\" (UniqueName: \"kubernetes.io/projected/681404ff-89eb-420d-b1e2-6769d4b51636-kube-api-access-lv4ch\") pod \"collect-profiles-29548815-whwnr\" (UID: \"681404ff-89eb-420d-b1e2-6769d4b51636\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215712 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f5cbd39f-952e-4664-8994-4b2dd4162b25-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mgmkn\" (UID: \"f5cbd39f-952e-4664-8994-4b2dd4162b25\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215728 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htklq\" (UniqueName: \"kubernetes.io/projected/62e4d886-779c-4931-87f7-370090b02132-kube-api-access-htklq\") pod \"marketplace-operator-79b997595-mg6jl\" (UID: \"62e4d886-779c-4931-87f7-370090b02132\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215750 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a49b045c-2159-4fd1-b0de-fcf1453e6adb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gppmf\" (UID: \"a49b045c-2159-4fd1-b0de-fcf1453e6adb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215792 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47ea3169-322b-4246-9a87-515ba6b49133-service-ca-bundle\") pod \"router-default-5444994796-kn22k\" (UID: \"47ea3169-322b-4246-9a87-515ba6b49133\") " pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215812 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxmdt\" (UniqueName: \"kubernetes.io/projected/f5cbd39f-952e-4664-8994-4b2dd4162b25-kube-api-access-zxmdt\") pod \"machine-config-operator-74547568cd-mgmkn\" (UID: \"f5cbd39f-952e-4664-8994-4b2dd4162b25\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215838 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da8e6450-ebd3-47b0-9153-1deebe16432f-metrics-tls\") pod \"ingress-operator-5b745b69d9-2sqjk\" (UID: \"da8e6450-ebd3-47b0-9153-1deebe16432f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215852 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/da8e6450-ebd3-47b0-9153-1deebe16432f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2sqjk\" (UID: \"da8e6450-ebd3-47b0-9153-1deebe16432f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215868 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f5cbd39f-952e-4664-8994-4b2dd4162b25-images\") pod \"machine-config-operator-74547568cd-mgmkn\" (UID: \"f5cbd39f-952e-4664-8994-4b2dd4162b25\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215886 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/62e4d886-779c-4931-87f7-370090b02132-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mg6jl\" (UID: \"62e4d886-779c-4931-87f7-370090b02132\") " pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.215915 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwrdc\" (UniqueName: \"kubernetes.io/projected/da8e6450-ebd3-47b0-9153-1deebe16432f-kube-api-access-hwrdc\") pod \"ingress-operator-5b745b69d9-2sqjk\" (UID: \"da8e6450-ebd3-47b0-9153-1deebe16432f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217192 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/681404ff-89eb-420d-b1e2-6769d4b51636-config-volume\") pod \"collect-profiles-29548815-whwnr\" (UID: \"681404ff-89eb-420d-b1e2-6769d4b51636\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" Mar 08 00:24:55 crc 
kubenswrapper[4762]: I0308 00:24:55.217004 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d729469-fc86-4275-9f27-df601b6a1700-config\") pod \"openshift-apiserver-operator-796bbdcf4f-kdmr2\" (UID: \"6d729469-fc86-4275-9f27-df601b6a1700\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217286 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9g2xn"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217318 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217334 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/47ea3169-322b-4246-9a87-515ba6b49133-stats-auth\") pod \"router-default-5444994796-kn22k\" (UID: \"47ea3169-322b-4246-9a87-515ba6b49133\") " pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217049 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f5cbd39f-952e-4664-8994-4b2dd4162b25-auth-proxy-config\") pod \"machine-config-operator-74547568cd-mgmkn\" (UID: \"f5cbd39f-952e-4664-8994-4b2dd4162b25\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217540 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9373f32-b3da-4942-83dd-490eb4d631fa-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7nw7k\" (UID: \"f9373f32-b3da-4942-83dd-490eb4d631fa\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217659 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9373f32-b3da-4942-83dd-490eb4d631fa-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7nw7k\" (UID: \"f9373f32-b3da-4942-83dd-490eb4d631fa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217689 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/47ea3169-322b-4246-9a87-515ba6b49133-default-certificate\") pod \"router-default-5444994796-kn22k\" (UID: \"47ea3169-322b-4246-9a87-515ba6b49133\") " pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217705 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/62e4d886-779c-4931-87f7-370090b02132-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mg6jl\" (UID: \"62e4d886-779c-4931-87f7-370090b02132\") " pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217720 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4e0bf995-5e45-4c39-8fbd-068691ed47bb-proxy-tls\") pod \"machine-config-controller-84d6567774-vnmhm\" (UID: \"4e0bf995-5e45-4c39-8fbd-068691ed47bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217750 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4e0bf995-5e45-4c39-8fbd-068691ed47bb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vnmhm\" (UID: \"4e0bf995-5e45-4c39-8fbd-068691ed47bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217800 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xt7c\" (UniqueName: \"kubernetes.io/projected/f9373f32-b3da-4942-83dd-490eb4d631fa-kube-api-access-5xt7c\") pod \"kube-storage-version-migrator-operator-b67b599dd-7nw7k\" (UID: \"f9373f32-b3da-4942-83dd-490eb4d631fa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217844 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a49b045c-2159-4fd1-b0de-fcf1453e6adb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gppmf\" (UID: \"a49b045c-2159-4fd1-b0de-fcf1453e6adb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217864 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnb44\" (UniqueName: \"kubernetes.io/projected/6d729469-fc86-4275-9f27-df601b6a1700-kube-api-access-hnb44\") pod \"openshift-apiserver-operator-796bbdcf4f-kdmr2\" (UID: \"6d729469-fc86-4275-9f27-df601b6a1700\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217879 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da8e6450-ebd3-47b0-9153-1deebe16432f-trusted-ca\") pod \"ingress-operator-5b745b69d9-2sqjk\" (UID: 
\"da8e6450-ebd3-47b0-9153-1deebe16432f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217911 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f5cbd39f-952e-4664-8994-4b2dd4162b25-proxy-tls\") pod \"machine-config-operator-74547568cd-mgmkn\" (UID: \"f5cbd39f-952e-4664-8994-4b2dd4162b25\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217927 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47ea3169-322b-4246-9a87-515ba6b49133-metrics-certs\") pod \"router-default-5444994796-kn22k\" (UID: \"47ea3169-322b-4246-9a87-515ba6b49133\") " pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.217942 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a49b045c-2159-4fd1-b0de-fcf1453e6adb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gppmf\" (UID: \"a49b045c-2159-4fd1-b0de-fcf1453e6adb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.218049 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/681404ff-89eb-420d-b1e2-6769d4b51636-secret-volume\") pod \"collect-profiles-29548815-whwnr\" (UID: \"681404ff-89eb-420d-b1e2-6769d4b51636\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.218074 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwgf2\" 
(UniqueName: \"kubernetes.io/projected/4e0bf995-5e45-4c39-8fbd-068691ed47bb-kube-api-access-bwgf2\") pod \"machine-config-controller-84d6567774-vnmhm\" (UID: \"4e0bf995-5e45-4c39-8fbd-068691ed47bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.218090 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fe7222be-b489-4bcb-bc44-0a8933cde1c5-serviceca\") pod \"image-pruner-29548800-x52sd\" (UID: \"fe7222be-b489-4bcb-bc44-0a8933cde1c5\") " pod="openshift-image-registry/image-pruner-29548800-x52sd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.218313 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-xq6ct"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.218492 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4e0bf995-5e45-4c39-8fbd-068691ed47bb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vnmhm\" (UID: \"4e0bf995-5e45-4c39-8fbd-068691ed47bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.219046 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xq6ct" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.219488 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-ghr8q"] Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.220079 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.220469 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fe7222be-b489-4bcb-bc44-0a8933cde1c5-serviceca\") pod \"image-pruner-29548800-x52sd\" (UID: \"fe7222be-b489-4bcb-bc44-0a8933cde1c5\") " pod="openshift-image-registry/image-pruner-29548800-x52sd" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.220693 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d729469-fc86-4275-9f27-df601b6a1700-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-kdmr2\" (UID: \"6d729469-fc86-4275-9f27-df601b6a1700\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.242643 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.249882 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da8e6450-ebd3-47b0-9153-1deebe16432f-trusted-ca\") pod \"ingress-operator-5b745b69d9-2sqjk\" (UID: \"da8e6450-ebd3-47b0-9153-1deebe16432f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.257612 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.261881 4762 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.262186 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.262196 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.270847 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/da8e6450-ebd3-47b0-9153-1deebe16432f-metrics-tls\") pod \"ingress-operator-5b745b69d9-2sqjk\" (UID: \"da8e6450-ebd3-47b0-9153-1deebe16432f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.277305 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.297584 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.318941 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.319919 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/681404ff-89eb-420d-b1e2-6769d4b51636-secret-volume\") pod \"collect-profiles-29548815-whwnr\" (UID: \"681404ff-89eb-420d-b1e2-6769d4b51636\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.320132 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv4ch\" (UniqueName: \"kubernetes.io/projected/681404ff-89eb-420d-b1e2-6769d4b51636-kube-api-access-lv4ch\") pod \"collect-profiles-29548815-whwnr\" (UID: 
\"681404ff-89eb-420d-b1e2-6769d4b51636\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.320384 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/681404ff-89eb-420d-b1e2-6769d4b51636-config-volume\") pod \"collect-profiles-29548815-whwnr\" (UID: \"681404ff-89eb-420d-b1e2-6769d4b51636\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.320497 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs\") pod \"network-metrics-daemon-gdnwf\" (UID: \"a6d5f4b4-a877-45da-9fed-81885011430f\") " pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.326800 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f5cbd39f-952e-4664-8994-4b2dd4162b25-images\") pod \"machine-config-operator-74547568cd-mgmkn\" (UID: \"f5cbd39f-952e-4664-8994-4b2dd4162b25\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.339929 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.359209 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.377416 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f5cbd39f-952e-4664-8994-4b2dd4162b25-proxy-tls\") pod 
\"machine-config-operator-74547568cd-mgmkn\" (UID: \"f5cbd39f-952e-4664-8994-4b2dd4162b25\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.388376 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.399641 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.417788 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.431448 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a49b045c-2159-4fd1-b0de-fcf1453e6adb-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gppmf\" (UID: \"a49b045c-2159-4fd1-b0de-fcf1453e6adb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.438001 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.448530 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a49b045c-2159-4fd1-b0de-fcf1453e6adb-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gppmf\" (UID: \"a49b045c-2159-4fd1-b0de-fcf1453e6adb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.457871 4762 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.477666 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.497514 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.510348 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/62e4d886-779c-4931-87f7-370090b02132-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mg6jl\" (UID: \"62e4d886-779c-4931-87f7-370090b02132\") " pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.522380 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.531240 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/62e4d886-779c-4931-87f7-370090b02132-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mg6jl\" (UID: \"62e4d886-779c-4931-87f7-370090b02132\") " pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.539410 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.558708 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.572480 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/4e0bf995-5e45-4c39-8fbd-068691ed47bb-proxy-tls\") pod \"machine-config-controller-84d6567774-vnmhm\" (UID: \"4e0bf995-5e45-4c39-8fbd-068691ed47bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.578616 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.597963 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.607457 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47ea3169-322b-4246-9a87-515ba6b49133-service-ca-bundle\") pod \"router-default-5444994796-kn22k\" (UID: \"47ea3169-322b-4246-9a87-515ba6b49133\") " pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.618226 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.637709 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.658037 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.676029 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/47ea3169-322b-4246-9a87-515ba6b49133-default-certificate\") pod \"router-default-5444994796-kn22k\" (UID: \"47ea3169-322b-4246-9a87-515ba6b49133\") " 
pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.679048 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.694236 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/47ea3169-322b-4246-9a87-515ba6b49133-stats-auth\") pod \"router-default-5444994796-kn22k\" (UID: \"47ea3169-322b-4246-9a87-515ba6b49133\") " pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.698218 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.713470 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/47ea3169-322b-4246-9a87-515ba6b49133-metrics-certs\") pod \"router-default-5444994796-kn22k\" (UID: \"47ea3169-322b-4246-9a87-515ba6b49133\") " pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.719378 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.738415 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.758594 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.779883 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 08 00:24:55 crc 
kubenswrapper[4762]: I0308 00:24:55.792755 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9373f32-b3da-4942-83dd-490eb4d631fa-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-7nw7k\" (UID: \"f9373f32-b3da-4942-83dd-490eb4d631fa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.799193 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.808516 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9373f32-b3da-4942-83dd-490eb4d631fa-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-7nw7k\" (UID: \"f9373f32-b3da-4942-83dd-490eb4d631fa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.819104 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.837955 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.858904 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.878695 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.898465 4762 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.918689 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.938371 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.957448 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.978108 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 00:24:55 crc kubenswrapper[4762]: I0308 00:24:55.999276 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.018190 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.039975 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.059181 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.085116 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.096307 4762 
request.go:700] Waited for 1.014332832s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.098215 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.118310 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.138539 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.158591 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.178556 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.198852 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.220874 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.237549 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.258863 4762 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.262295 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.262318 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.264001 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/681404ff-89eb-420d-b1e2-6769d4b51636-secret-volume\") pod \"collect-profiles-29548815-whwnr\" (UID: \"681404ff-89eb-420d-b1e2-6769d4b51636\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.278403 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.299050 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.318589 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 08 00:24:56 crc kubenswrapper[4762]: E0308 00:24:56.321153 4762 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: failed to sync secret cache: timed out waiting for the condition Mar 08 00:24:56 crc kubenswrapper[4762]: E0308 00:24:56.321255 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs podName:a6d5f4b4-a877-45da-9fed-81885011430f nodeName:}" failed. 
No retries permitted until 2026-03-08 00:25:04.321228161 +0000 UTC m=+125.795372525 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs") pod "network-metrics-daemon-gdnwf" (UID: "a6d5f4b4-a877-45da-9fed-81885011430f") : failed to sync secret cache: timed out waiting for the condition Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.322204 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/681404ff-89eb-420d-b1e2-6769d4b51636-config-volume\") pod \"collect-profiles-29548815-whwnr\" (UID: \"681404ff-89eb-420d-b1e2-6769d4b51636\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.338591 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.359341 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.399084 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.418906 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.439819 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.458196 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 08 00:24:56 crc 
kubenswrapper[4762]: I0308 00:24:56.478178 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.499083 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.517946 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.538884 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.558379 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.578833 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.598931 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.618977 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.638786 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.661410 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.700556 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vghvr\" (UniqueName: 
\"kubernetes.io/projected/6e8e8070-7d3f-4a58-b1ce-6f240bb0170d-kube-api-access-vghvr\") pod \"downloads-7954f5f757-84dbj\" (UID: \"6e8e8070-7d3f-4a58-b1ce-6f240bb0170d\") " pod="openshift-console/downloads-7954f5f757-84dbj" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.715414 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtxjs\" (UniqueName: \"kubernetes.io/projected/ca82b7d9-bbba-4543-945b-e78923c1d3cf-kube-api-access-rtxjs\") pod \"console-f9d7485db-842sk\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.726734 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-84dbj" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.738085 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glqkt\" (UniqueName: \"kubernetes.io/projected/460afccf-5d2c-44d9-813e-41c06be89ab7-kube-api-access-glqkt\") pod \"apiserver-7bbb656c7d-h5bzn\" (UID: \"460afccf-5d2c-44d9-813e-41c06be89ab7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.752955 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l94sd\" (UniqueName: \"kubernetes.io/projected/9d3224d2-e83a-4707-9e42-e13d68451af3-kube-api-access-l94sd\") pod \"authentication-operator-69f744f599-nq4dh\" (UID: \"9d3224d2-e83a-4707-9e42-e13d68451af3\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.783122 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnm4s\" (UniqueName: \"kubernetes.io/projected/1d1f4801-1613-4369-8d06-d0345df9703a-kube-api-access-bnm4s\") pod \"controller-manager-879f6c89f-vw2gn\" (UID: 
\"1d1f4801-1613-4369-8d06-d0345df9703a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.785696 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.798059 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-957wg\" (UniqueName: \"kubernetes.io/projected/35bd79d9-3f11-4ed4-85b6-39711c51f58d-kube-api-access-957wg\") pod \"machine-approver-56656f9798-q44mn\" (UID: \"35bd79d9-3f11-4ed4-85b6-39711c51f58d\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.810842 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.819497 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.820913 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwtf2\" (UniqueName: \"kubernetes.io/projected/f42edfa8-610d-4cdf-a0db-63d3ccad4615-kube-api-access-jwtf2\") pod \"machine-api-operator-5694c8668f-frbvk\" (UID: \"f42edfa8-610d-4cdf-a0db-63d3ccad4615\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.863572 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9xkq\" (UniqueName: \"kubernetes.io/projected/1d484943-583d-493a-ab04-bf99847ff4c4-kube-api-access-x9xkq\") pod \"console-operator-58897d9998-tw6wd\" (UID: \"1d484943-583d-493a-ab04-bf99847ff4c4\") " 
pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.875669 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7a2076a9-3463-44f1-9c63-225f97d62769-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-wltst\" (UID: \"7a2076a9-3463-44f1-9c63-225f97d62769\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.891972 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.898457 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.902347 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4qj2\" (UniqueName: \"kubernetes.io/projected/bb1d6b44-973f-4add-aadd-0dbbea83af1d-kube-api-access-s4qj2\") pod \"openshift-controller-manager-operator-756b6f6bc6-hv69z\" (UID: \"bb1d6b44-973f-4add-aadd-0dbbea83af1d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.919091 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.934658 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" event={"ID":"35bd79d9-3f11-4ed4-85b6-39711c51f58d","Type":"ContainerStarted","Data":"3f9a454debf2dd7393b6a38b27ba50558942078d18d0bd72a6fdf928d987e02d"} Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.938320 4762 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.959460 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.968514 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-84dbj"] Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.978487 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.986196 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nq4dh"] Mar 08 00:24:56 crc kubenswrapper[4762]: I0308 00:24:56.992933 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:56.999876 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.001184 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 00:24:57 crc kubenswrapper[4762]: W0308 00:24:57.012164 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d3224d2_e83a_4707_9e42_e13d68451af3.slice/crio-111a0d31c56ce31a771492d727ccecd2309a5b170ee773e70a7979401e7a4113 WatchSource:0}: Error finding container 111a0d31c56ce31a771492d727ccecd2309a5b170ee773e70a7979401e7a4113: Status 404 returned error can't find the container with id 111a0d31c56ce31a771492d727ccecd2309a5b170ee773e70a7979401e7a4113 Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.026374 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.034444 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.044445 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xm4c\" (UniqueName: \"kubernetes.io/projected/cf5ac2df-231b-4019-a6ad-a9485ee8802e-kube-api-access-7xm4c\") pod \"oauth-openshift-558db77b4-5pmsg\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.057715 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn"] Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.059415 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/041990e2-6203-42ce-b6fd-6882e21fc2a7-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-c78f7\" (UID: 
\"041990e2-6203-42ce-b6fd-6882e21fc2a7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.075787 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.081106 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b54sj\" (UniqueName: \"kubernetes.io/projected/46ce5811-52d7-493a-a861-90d666c994ed-kube-api-access-b54sj\") pod \"apiserver-76f77b778f-jthcx\" (UID: \"46ce5811-52d7-493a-a861-90d666c994ed\") " pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.096359 4762 request.go:700] Waited for 1.952482879s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.133784 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.138147 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phkph\" (UniqueName: \"kubernetes.io/projected/18f4f24d-5e64-4cbb-b6f2-59b836b022e0-kube-api-access-phkph\") pod \"dns-operator-744455d44c-nf2fd\" (UID: \"18f4f24d-5e64-4cbb-b6f2-59b836b022e0\") " pod="openshift-dns-operator/dns-operator-744455d44c-nf2fd" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.139127 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt6tb\" (UniqueName: \"kubernetes.io/projected/bd304649-1e47-4aca-ac7f-31ce823babbb-kube-api-access-kt6tb\") pod \"cluster-samples-operator-665b6dd947-zdv55\" (UID: \"bd304649-1e47-4aca-ac7f-31ce823babbb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zdv55" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.154563 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48rsk\" (UniqueName: \"kubernetes.io/projected/041990e2-6203-42ce-b6fd-6882e21fc2a7-kube-api-access-48rsk\") pod \"cluster-image-registry-operator-dc59b4c8b-c78f7\" (UID: \"041990e2-6203-42ce-b6fd-6882e21fc2a7\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.154581 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h7pg\" (UniqueName: \"kubernetes.io/projected/281aeeb6-c1e4-4189-ae69-1b28741649d4-kube-api-access-8h7pg\") pod \"etcd-operator-b45778765-gm49w\" (UID: \"281aeeb6-c1e4-4189-ae69-1b28741649d4\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.159295 4762 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 08 00:24:57 crc kubenswrapper[4762]: W0308 00:24:57.162171 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a2076a9_3463_44f1_9c63_225f97d62769.slice/crio-9ab52427b67e484771062b75191ee69894c29c8355982d786e9bf8da33879f32 WatchSource:0}: Error finding container 9ab52427b67e484771062b75191ee69894c29c8355982d786e9bf8da33879f32: Status 404 returned error can't find the container with id 9ab52427b67e484771062b75191ee69894c29c8355982d786e9bf8da33879f32 Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.179418 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.204974 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.223371 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.258017 4762 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.277626 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.287992 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-842sk"] Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.297786 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.312792 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zdv55" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.316161 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.319379 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.319613 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.339396 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-nf2fd" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.339471 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.346195 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.354932 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.358671 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.396576 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htklq\" (UniqueName: \"kubernetes.io/projected/62e4d886-779c-4931-87f7-370090b02132-kube-api-access-htklq\") pod \"marketplace-operator-79b997595-mg6jl\" (UID: \"62e4d886-779c-4931-87f7-370090b02132\") " pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.399966 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.414444 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsxsb\" (UniqueName: \"kubernetes.io/projected/fe7222be-b489-4bcb-bc44-0a8933cde1c5-kube-api-access-lsxsb\") pod \"image-pruner-29548800-x52sd\" (UID: \"fe7222be-b489-4bcb-bc44-0a8933cde1c5\") " pod="openshift-image-registry/image-pruner-29548800-x52sd" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.439451 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmcwn\" (UniqueName: \"kubernetes.io/projected/47ea3169-322b-4246-9a87-515ba6b49133-kube-api-access-fmcwn\") pod \"router-default-5444994796-kn22k\" (UID: \"47ea3169-322b-4246-9a87-515ba6b49133\") " pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.458157 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vw2gn"] Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.475064 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxmdt\" (UniqueName: \"kubernetes.io/projected/f5cbd39f-952e-4664-8994-4b2dd4162b25-kube-api-access-zxmdt\") pod \"machine-config-operator-74547568cd-mgmkn\" (UID: \"f5cbd39f-952e-4664-8994-4b2dd4162b25\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.478825 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/da8e6450-ebd3-47b0-9153-1deebe16432f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2sqjk\" (UID: \"da8e6450-ebd3-47b0-9153-1deebe16432f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.497778 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwrdc\" (UniqueName: \"kubernetes.io/projected/da8e6450-ebd3-47b0-9153-1deebe16432f-kube-api-access-hwrdc\") pod \"ingress-operator-5b745b69d9-2sqjk\" (UID: \"da8e6450-ebd3-47b0-9153-1deebe16432f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.522347 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xt7c\" (UniqueName: \"kubernetes.io/projected/f9373f32-b3da-4942-83dd-490eb4d631fa-kube-api-access-5xt7c\") pod \"kube-storage-version-migrator-operator-b67b599dd-7nw7k\" (UID: \"f9373f32-b3da-4942-83dd-490eb4d631fa\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.536039 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnb44\" (UniqueName: \"kubernetes.io/projected/6d729469-fc86-4275-9f27-df601b6a1700-kube-api-access-hnb44\") pod 
\"openshift-apiserver-operator-796bbdcf4f-kdmr2\" (UID: \"6d729469-fc86-4275-9f27-df601b6a1700\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.559182 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a49b045c-2159-4fd1-b0de-fcf1453e6adb-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gppmf\" (UID: \"a49b045c-2159-4fd1-b0de-fcf1453e6adb\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.579278 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.579713 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-frbvk"] Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.580642 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z"] Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.584936 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwgf2\" (UniqueName: \"kubernetes.io/projected/4e0bf995-5e45-4c39-8fbd-068691ed47bb-kube-api-access-bwgf2\") pod \"machine-config-controller-84d6567774-vnmhm\" (UID: \"4e0bf995-5e45-4c39-8fbd-068691ed47bb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.587822 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tw6wd"] Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.598831 4762 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.619271 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.640167 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.662854 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.665072 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.669997 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29548800-x52sd" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.677086 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.681889 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.684389 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.693189 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.699379 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.706864 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.714790 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.718778 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.722571 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.779309 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.779568 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv4ch\" (UniqueName: \"kubernetes.io/projected/681404ff-89eb-420d-b1e2-6769d4b51636-kube-api-access-lv4ch\") pod \"collect-profiles-29548815-whwnr\" (UID: \"681404ff-89eb-420d-b1e2-6769d4b51636\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.791204 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.799017 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 08 00:24:57 crc kubenswrapper[4762]: W0308 00:24:57.808902 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47ea3169_322b_4246_9a87_515ba6b49133.slice/crio-411c4bc14d78547d8a67b2a3a1c20ecb0ddd4dab8f018d4bd934950a54dcb1de WatchSource:0}: Error finding container 411c4bc14d78547d8a67b2a3a1c20ecb0ddd4dab8f018d4bd934950a54dcb1de: Status 404 returned error can't find the container with id 411c4bc14d78547d8a67b2a3a1c20ecb0ddd4dab8f018d4bd934950a54dcb1de Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.821780 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mg6jl"] Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.824336 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zdv55"] Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.825784 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jthcx"] Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.865237 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-registry-certificates\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.865559 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4e4e5ef1-02ce-403c-9903-0e9c734f8bdb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fx8np\" (UID: \"4e4e5ef1-02ce-403c-9903-0e9c734f8bdb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fx8np" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.865630 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a713e9e-1823-4b89-a74c-922ed73cdd15-config\") pod \"kube-controller-manager-operator-78b949d7b-mft86\" (UID: \"1a713e9e-1823-4b89-a74c-922ed73cdd15\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.865647 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4de5942e-acf8-4138-acc3-42c177a7f997-profile-collector-cert\") pod \"catalog-operator-68c6474976-dbz7x\" (UID: \"4de5942e-acf8-4138-acc3-42c177a7f997\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.865665 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29xzn\" (UniqueName: \"kubernetes.io/projected/ec25fa7b-0753-4de2-8744-386eee28051a-kube-api-access-29xzn\") pod \"ingress-canary-8lcsp\" (UID: \"ec25fa7b-0753-4de2-8744-386eee28051a\") " pod="openshift-ingress-canary/ingress-canary-8lcsp" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.865705 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-746kw\" (UniqueName: \"kubernetes.io/projected/7a27cd53-cc43-4227-a15a-d55e0bfaf81d-kube-api-access-746kw\") pod \"packageserver-d55dfcdfc-88s4d\" (UID: \"7a27cd53-cc43-4227-a15a-d55e0bfaf81d\") 
" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.865721 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.866211 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a27cd53-cc43-4227-a15a-d55e0bfaf81d-webhook-cert\") pod \"packageserver-d55dfcdfc-88s4d\" (UID: \"7a27cd53-cc43-4227-a15a-d55e0bfaf81d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.866241 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78fn5\" (UniqueName: \"kubernetes.io/projected/a1568d57-fbed-428e-9898-c3d5863be0a2-kube-api-access-78fn5\") pod \"route-controller-manager-6576b87f9c-jm2zf\" (UID: \"a1568d57-fbed-428e-9898-c3d5863be0a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.866338 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13-signing-key\") pod \"service-ca-9c57cc56f-9g2xn\" (UID: \"cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13\") " pod="openshift-service-ca/service-ca-9c57cc56f-9g2xn" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.866357 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-bound-sa-token\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.866376 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njxsf\" (UniqueName: \"kubernetes.io/projected/a1b71198-134e-4cec-9f0b-b28979adf785-kube-api-access-njxsf\") pod \"openshift-config-operator-7777fb866f-t95jr\" (UID: \"a1b71198-134e-4cec-9f0b-b28979adf785\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.866477 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/68ab86d6-f824-445d-b441-b7cbba73630b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vplbt\" (UID: \"68ab86d6-f824-445d-b441-b7cbba73630b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vplbt" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.866524 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a27cd53-cc43-4227-a15a-d55e0bfaf81d-apiservice-cert\") pod \"packageserver-d55dfcdfc-88s4d\" (UID: \"7a27cd53-cc43-4227-a15a-d55e0bfaf81d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.866541 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f160b607-f21e-4909-b6f5-a6756e9c6241-config\") pod \"service-ca-operator-777779d784-49qkg\" (UID: 
\"f160b607-f21e-4909-b6f5-a6756e9c6241\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49qkg" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.866611 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.866633 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/04980224-fe82-485b-83f9-9c3d30b196db-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nh2q6\" (UID: \"04980224-fe82-485b-83f9-9c3d30b196db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.866711 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65rz7\" (UniqueName: \"kubernetes.io/projected/4de5942e-acf8-4138-acc3-42c177a7f997-kube-api-access-65rz7\") pod \"catalog-operator-68c6474976-dbz7x\" (UID: \"4de5942e-acf8-4138-acc3-42c177a7f997\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.866749 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdntt\" (UniqueName: \"kubernetes.io/projected/741e90e6-8de3-4054-94cf-7ada0da0e454-kube-api-access-jdntt\") pod \"olm-operator-6b444d44fb-vwrhn\" (UID: \"741e90e6-8de3-4054-94cf-7ada0da0e454\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" Mar 08 00:24:57 crc 
kubenswrapper[4762]: I0308 00:24:57.867706 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-registry-tls\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.867750 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f160b607-f21e-4909-b6f5-a6756e9c6241-serving-cert\") pod \"service-ca-operator-777779d784-49qkg\" (UID: \"f160b607-f21e-4909-b6f5-a6756e9c6241\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49qkg" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.867830 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2mjr\" (UniqueName: \"kubernetes.io/projected/af536ee8-823b-4496-b7b5-b9dee6b9c957-kube-api-access-t2mjr\") pod \"migrator-59844c95c7-wd6hs\" (UID: \"af536ee8-823b-4496-b7b5-b9dee6b9c957\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wd6hs" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.867849 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a713e9e-1823-4b89-a74c-922ed73cdd15-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mft86\" (UID: \"1a713e9e-1823-4b89-a74c-922ed73cdd15\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.867913 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcn7n\" (UniqueName: 
\"kubernetes.io/projected/68ab86d6-f824-445d-b441-b7cbba73630b-kube-api-access-rcn7n\") pod \"control-plane-machine-set-operator-78cbb6b69f-vplbt\" (UID: \"68ab86d6-f824-445d-b441-b7cbba73630b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vplbt" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.867952 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdvkh\" (UniqueName: \"kubernetes.io/projected/f160b607-f21e-4909-b6f5-a6756e9c6241-kube-api-access-kdvkh\") pod \"service-ca-operator-777779d784-49qkg\" (UID: \"f160b607-f21e-4909-b6f5-a6756e9c6241\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49qkg" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.868309 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9970d4db-1af9-4970-a930-c469cc02bf9f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mns85\" (UID: \"9970d4db-1af9-4970-a930-c469cc02bf9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.868417 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/741e90e6-8de3-4054-94cf-7ada0da0e454-srv-cert\") pod \"olm-operator-6b444d44fb-vwrhn\" (UID: \"741e90e6-8de3-4054-94cf-7ada0da0e454\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.868774 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a1b71198-134e-4cec-9f0b-b28979adf785-available-featuregates\") pod \"openshift-config-operator-7777fb866f-t95jr\" (UID: \"a1b71198-134e-4cec-9f0b-b28979adf785\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.868815 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g2wd\" (UniqueName: \"kubernetes.io/projected/cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13-kube-api-access-7g2wd\") pod \"service-ca-9c57cc56f-9g2xn\" (UID: \"cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13\") " pod="openshift-service-ca/service-ca-9c57cc56f-9g2xn" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.868871 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7a27cd53-cc43-4227-a15a-d55e0bfaf81d-tmpfs\") pod \"packageserver-d55dfcdfc-88s4d\" (UID: \"7a27cd53-cc43-4227-a15a-d55e0bfaf81d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.868904 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1b71198-134e-4cec-9f0b-b28979adf785-serving-cert\") pod \"openshift-config-operator-7777fb866f-t95jr\" (UID: \"a1b71198-134e-4cec-9f0b-b28979adf785\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.868983 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-trusted-ca\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.868999 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/9970d4db-1af9-4970-a930-c469cc02bf9f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mns85\" (UID: \"9970d4db-1af9-4970-a930-c469cc02bf9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.873852 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a713e9e-1823-4b89-a74c-922ed73cdd15-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mft86\" (UID: \"1a713e9e-1823-4b89-a74c-922ed73cdd15\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.873901 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt4pc\" (UniqueName: \"kubernetes.io/projected/04980224-fe82-485b-83f9-9c3d30b196db-kube-api-access-jt4pc\") pod \"package-server-manager-789f6589d5-nh2q6\" (UID: \"04980224-fe82-485b-83f9-9c3d30b196db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.873949 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjb9r\" (UniqueName: \"kubernetes.io/projected/4e4e5ef1-02ce-403c-9903-0e9c734f8bdb-kube-api-access-vjb9r\") pod \"multus-admission-controller-857f4d67dd-fx8np\" (UID: \"4e4e5ef1-02ce-403c-9903-0e9c734f8bdb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fx8np" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.873974 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1568d57-fbed-428e-9898-c3d5863be0a2-serving-cert\") pod \"route-controller-manager-6576b87f9c-jm2zf\" (UID: 
\"a1568d57-fbed-428e-9898-c3d5863be0a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.874014 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec25fa7b-0753-4de2-8744-386eee28051a-cert\") pod \"ingress-canary-8lcsp\" (UID: \"ec25fa7b-0753-4de2-8744-386eee28051a\") " pod="openshift-ingress-canary/ingress-canary-8lcsp" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.874052 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.874070 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13-signing-cabundle\") pod \"service-ca-9c57cc56f-9g2xn\" (UID: \"cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13\") " pod="openshift-service-ca/service-ca-9c57cc56f-9g2xn" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.874109 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1568d57-fbed-428e-9898-c3d5863be0a2-config\") pod \"route-controller-manager-6576b87f9c-jm2zf\" (UID: \"a1568d57-fbed-428e-9898-c3d5863be0a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.874138 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1568d57-fbed-428e-9898-c3d5863be0a2-client-ca\") pod \"route-controller-manager-6576b87f9c-jm2zf\" (UID: \"a1568d57-fbed-428e-9898-c3d5863be0a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.874161 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4de5942e-acf8-4138-acc3-42c177a7f997-srv-cert\") pod \"catalog-operator-68c6474976-dbz7x\" (UID: \"4de5942e-acf8-4138-acc3-42c177a7f997\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.874201 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9970d4db-1af9-4970-a930-c469cc02bf9f-config\") pod \"kube-apiserver-operator-766d6c64bb-mns85\" (UID: \"9970d4db-1af9-4970-a930-c469cc02bf9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.874254 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/741e90e6-8de3-4054-94cf-7ada0da0e454-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vwrhn\" (UID: \"741e90e6-8de3-4054-94cf-7ada0da0e454\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.874844 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwzwz\" (UniqueName: \"kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-kube-api-access-fwzwz\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:57 crc kubenswrapper[4762]: E0308 00:24:57.880499 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:24:58.380484246 +0000 UTC m=+119.854628590 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.899374 4762 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.949176 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nf2fd"] Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.963598 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" event={"ID":"7a2076a9-3463-44f1-9c63-225f97d62769","Type":"ContainerStarted","Data":"3eb2eef649e6db4ebffa71e40eb0f8a3f13af0e417efa0063bb8dc98d92a2b71"} Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.963728 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" event={"ID":"7a2076a9-3463-44f1-9c63-225f97d62769","Type":"ContainerStarted","Data":"9ab52427b67e484771062b75191ee69894c29c8355982d786e9bf8da33879f32"} Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 
00:24:57.966671 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5pmsg"] Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.967709 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" event={"ID":"62e4d886-779c-4931-87f7-370090b02132","Type":"ContainerStarted","Data":"9d30a884a536fb7b0fb52ff8332cd077d39576bdc98522824cf478872a01702f"} Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.976378 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977041 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt4pc\" (UniqueName: \"kubernetes.io/projected/04980224-fe82-485b-83f9-9c3d30b196db-kube-api-access-jt4pc\") pod \"package-server-manager-789f6589d5-nh2q6\" (UID: \"04980224-fe82-485b-83f9-9c3d30b196db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977067 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjb9r\" (UniqueName: \"kubernetes.io/projected/4e4e5ef1-02ce-403c-9903-0e9c734f8bdb-kube-api-access-vjb9r\") pod \"multus-admission-controller-857f4d67dd-fx8np\" (UID: \"4e4e5ef1-02ce-403c-9903-0e9c734f8bdb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fx8np" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977082 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a1568d57-fbed-428e-9898-c3d5863be0a2-serving-cert\") pod \"route-controller-manager-6576b87f9c-jm2zf\" (UID: \"a1568d57-fbed-428e-9898-c3d5863be0a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977116 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec25fa7b-0753-4de2-8744-386eee28051a-cert\") pod \"ingress-canary-8lcsp\" (UID: \"ec25fa7b-0753-4de2-8744-386eee28051a\") " pod="openshift-ingress-canary/ingress-canary-8lcsp" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977140 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13-signing-cabundle\") pod \"service-ca-9c57cc56f-9g2xn\" (UID: \"cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13\") " pod="openshift-service-ca/service-ca-9c57cc56f-9g2xn" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977155 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1568d57-fbed-428e-9898-c3d5863be0a2-config\") pod \"route-controller-manager-6576b87f9c-jm2zf\" (UID: \"a1568d57-fbed-428e-9898-c3d5863be0a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977189 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1568d57-fbed-428e-9898-c3d5863be0a2-client-ca\") pod \"route-controller-manager-6576b87f9c-jm2zf\" (UID: \"a1568d57-fbed-428e-9898-c3d5863be0a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977204 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4de5942e-acf8-4138-acc3-42c177a7f997-srv-cert\") pod \"catalog-operator-68c6474976-dbz7x\" (UID: \"4de5942e-acf8-4138-acc3-42c177a7f997\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977221 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9970d4db-1af9-4970-a930-c469cc02bf9f-config\") pod \"kube-apiserver-operator-766d6c64bb-mns85\" (UID: \"9970d4db-1af9-4970-a930-c469cc02bf9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977237 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/741e90e6-8de3-4054-94cf-7ada0da0e454-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vwrhn\" (UID: \"741e90e6-8de3-4054-94cf-7ada0da0e454\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977282 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-registration-dir\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977302 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a34caab-79f1-437e-be12-817faf4e8917-config-volume\") pod \"dns-default-77vgj\" (UID: \"0a34caab-79f1-437e-be12-817faf4e8917\") " pod="openshift-dns/dns-default-77vgj" Mar 08 00:24:57 crc 
kubenswrapper[4762]: I0308 00:24:57.977333 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/be598188-2db1-4d79-9d80-cc03d00fef50-certs\") pod \"machine-config-server-xq6ct\" (UID: \"be598188-2db1-4d79-9d80-cc03d00fef50\") " pod="openshift-machine-config-operator/machine-config-server-xq6ct" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977352 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwzwz\" (UniqueName: \"kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-kube-api-access-fwzwz\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977371 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5tz2\" (UniqueName: \"kubernetes.io/projected/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-kube-api-access-h5tz2\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977390 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4e4e5ef1-02ce-403c-9903-0e9c734f8bdb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fx8np\" (UID: \"4e4e5ef1-02ce-403c-9903-0e9c734f8bdb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fx8np" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977422 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-registry-certificates\") pod \"image-registry-697d97f7c8-n786p\" (UID: 
\"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977438 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-plugins-dir\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977455 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a713e9e-1823-4b89-a74c-922ed73cdd15-config\") pod \"kube-controller-manager-operator-78b949d7b-mft86\" (UID: \"1a713e9e-1823-4b89-a74c-922ed73cdd15\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977470 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4de5942e-acf8-4138-acc3-42c177a7f997-profile-collector-cert\") pod \"catalog-operator-68c6474976-dbz7x\" (UID: \"4de5942e-acf8-4138-acc3-42c177a7f997\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977511 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29xzn\" (UniqueName: \"kubernetes.io/projected/ec25fa7b-0753-4de2-8744-386eee28051a-kube-api-access-29xzn\") pod \"ingress-canary-8lcsp\" (UID: \"ec25fa7b-0753-4de2-8744-386eee28051a\") " pod="openshift-ingress-canary/ingress-canary-8lcsp" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977539 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/47863e3b-949c-40f1-bdb3-2d940b78cda0-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-ghr8q\" (UID: \"47863e3b-949c-40f1-bdb3-2d940b78cda0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977578 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-746kw\" (UniqueName: \"kubernetes.io/projected/7a27cd53-cc43-4227-a15a-d55e0bfaf81d-kube-api-access-746kw\") pod \"packageserver-d55dfcdfc-88s4d\" (UID: \"7a27cd53-cc43-4227-a15a-d55e0bfaf81d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977599 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977617 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/be598188-2db1-4d79-9d80-cc03d00fef50-node-bootstrap-token\") pod \"machine-config-server-xq6ct\" (UID: \"be598188-2db1-4d79-9d80-cc03d00fef50\") " pod="openshift-machine-config-operator/machine-config-server-xq6ct" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977650 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13-signing-key\") pod \"service-ca-9c57cc56f-9g2xn\" (UID: \"cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13\") " pod="openshift-service-ca/service-ca-9c57cc56f-9g2xn" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977666 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a27cd53-cc43-4227-a15a-d55e0bfaf81d-webhook-cert\") pod \"packageserver-d55dfcdfc-88s4d\" (UID: \"7a27cd53-cc43-4227-a15a-d55e0bfaf81d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977683 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78fn5\" (UniqueName: \"kubernetes.io/projected/a1568d57-fbed-428e-9898-c3d5863be0a2-kube-api-access-78fn5\") pod \"route-controller-manager-6576b87f9c-jm2zf\" (UID: \"a1568d57-fbed-428e-9898-c3d5863be0a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977698 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-mountpoint-dir\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977728 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-bound-sa-token\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977745 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njxsf\" (UniqueName: \"kubernetes.io/projected/a1b71198-134e-4cec-9f0b-b28979adf785-kube-api-access-njxsf\") pod \"openshift-config-operator-7777fb866f-t95jr\" (UID: \"a1b71198-134e-4cec-9f0b-b28979adf785\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977805 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a34caab-79f1-437e-be12-817faf4e8917-metrics-tls\") pod \"dns-default-77vgj\" (UID: \"0a34caab-79f1-437e-be12-817faf4e8917\") " pod="openshift-dns/dns-default-77vgj" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977831 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/68ab86d6-f824-445d-b441-b7cbba73630b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vplbt\" (UID: \"68ab86d6-f824-445d-b441-b7cbba73630b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vplbt" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977874 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a27cd53-cc43-4227-a15a-d55e0bfaf81d-apiservice-cert\") pod \"packageserver-d55dfcdfc-88s4d\" (UID: \"7a27cd53-cc43-4227-a15a-d55e0bfaf81d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977902 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f160b607-f21e-4909-b6f5-a6756e9c6241-config\") pod \"service-ca-operator-777779d784-49qkg\" (UID: \"f160b607-f21e-4909-b6f5-a6756e9c6241\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49qkg" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977953 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977971 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/04980224-fe82-485b-83f9-9c3d30b196db-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-nh2q6\" (UID: \"04980224-fe82-485b-83f9-9c3d30b196db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.977986 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65rz7\" (UniqueName: \"kubernetes.io/projected/4de5942e-acf8-4138-acc3-42c177a7f997-kube-api-access-65rz7\") pod \"catalog-operator-68c6474976-dbz7x\" (UID: \"4de5942e-acf8-4138-acc3-42c177a7f997\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978018 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdntt\" (UniqueName: \"kubernetes.io/projected/741e90e6-8de3-4054-94cf-7ada0da0e454-kube-api-access-jdntt\") pod \"olm-operator-6b444d44fb-vwrhn\" (UID: \"741e90e6-8de3-4054-94cf-7ada0da0e454\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978037 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-registry-tls\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" 
Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978052 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f160b607-f21e-4909-b6f5-a6756e9c6241-serving-cert\") pod \"service-ca-operator-777779d784-49qkg\" (UID: \"f160b607-f21e-4909-b6f5-a6756e9c6241\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49qkg" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978069 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/47863e3b-949c-40f1-bdb3-2d940b78cda0-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-ghr8q\" (UID: \"47863e3b-949c-40f1-bdb3-2d940b78cda0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978103 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a713e9e-1823-4b89-a74c-922ed73cdd15-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mft86\" (UID: \"1a713e9e-1823-4b89-a74c-922ed73cdd15\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978119 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2mjr\" (UniqueName: \"kubernetes.io/projected/af536ee8-823b-4496-b7b5-b9dee6b9c957-kube-api-access-t2mjr\") pod \"migrator-59844c95c7-wd6hs\" (UID: \"af536ee8-823b-4496-b7b5-b9dee6b9c957\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wd6hs" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978144 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcn7n\" (UniqueName: \"kubernetes.io/projected/68ab86d6-f824-445d-b441-b7cbba73630b-kube-api-access-rcn7n\") 
pod \"control-plane-machine-set-operator-78cbb6b69f-vplbt\" (UID: \"68ab86d6-f824-445d-b441-b7cbba73630b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vplbt" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978177 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmr5j\" (UniqueName: \"kubernetes.io/projected/0a34caab-79f1-437e-be12-817faf4e8917-kube-api-access-pmr5j\") pod \"dns-default-77vgj\" (UID: \"0a34caab-79f1-437e-be12-817faf4e8917\") " pod="openshift-dns/dns-default-77vgj" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978197 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdvkh\" (UniqueName: \"kubernetes.io/projected/f160b607-f21e-4909-b6f5-a6756e9c6241-kube-api-access-kdvkh\") pod \"service-ca-operator-777779d784-49qkg\" (UID: \"f160b607-f21e-4909-b6f5-a6756e9c6241\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49qkg" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978216 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9970d4db-1af9-4970-a930-c469cc02bf9f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mns85\" (UID: \"9970d4db-1af9-4970-a930-c469cc02bf9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978248 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a1b71198-134e-4cec-9f0b-b28979adf785-available-featuregates\") pod \"openshift-config-operator-7777fb866f-t95jr\" (UID: \"a1b71198-134e-4cec-9f0b-b28979adf785\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978263 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/741e90e6-8de3-4054-94cf-7ada0da0e454-srv-cert\") pod \"olm-operator-6b444d44fb-vwrhn\" (UID: \"741e90e6-8de3-4054-94cf-7ada0da0e454\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978279 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g2wd\" (UniqueName: \"kubernetes.io/projected/cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13-kube-api-access-7g2wd\") pod \"service-ca-9c57cc56f-9g2xn\" (UID: \"cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13\") " pod="openshift-service-ca/service-ca-9c57cc56f-9g2xn" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978296 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7a27cd53-cc43-4227-a15a-d55e0bfaf81d-tmpfs\") pod \"packageserver-d55dfcdfc-88s4d\" (UID: \"7a27cd53-cc43-4227-a15a-d55e0bfaf81d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978327 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4r65\" (UniqueName: \"kubernetes.io/projected/be598188-2db1-4d79-9d80-cc03d00fef50-kube-api-access-s4r65\") pod \"machine-config-server-xq6ct\" (UID: \"be598188-2db1-4d79-9d80-cc03d00fef50\") " pod="openshift-machine-config-operator/machine-config-server-xq6ct" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978346 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1b71198-134e-4cec-9f0b-b28979adf785-serving-cert\") pod \"openshift-config-operator-7777fb866f-t95jr\" (UID: \"a1b71198-134e-4cec-9f0b-b28979adf785\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978366 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfhmz\" (UniqueName: \"kubernetes.io/projected/47863e3b-949c-40f1-bdb3-2d940b78cda0-kube-api-access-pfhmz\") pod \"cni-sysctl-allowlist-ds-ghr8q\" (UID: \"47863e3b-949c-40f1-bdb3-2d940b78cda0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978382 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-trusted-ca\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978425 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9970d4db-1af9-4970-a930-c469cc02bf9f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mns85\" (UID: \"9970d4db-1af9-4970-a930-c469cc02bf9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978441 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/47863e3b-949c-40f1-bdb3-2d940b78cda0-ready\") pod \"cni-sysctl-allowlist-ds-ghr8q\" (UID: \"47863e3b-949c-40f1-bdb3-2d940b78cda0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978460 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-csi-data-dir\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978501 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-socket-dir\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.978525 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a713e9e-1823-4b89-a74c-922ed73cdd15-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mft86\" (UID: \"1a713e9e-1823-4b89-a74c-922ed73cdd15\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86" Mar 08 00:24:57 crc kubenswrapper[4762]: E0308 00:24:57.980011 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:24:58.479982978 +0000 UTC m=+119.954127312 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.981032 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" event={"ID":"35bd79d9-3f11-4ed4-85b6-39711c51f58d","Type":"ContainerStarted","Data":"a2a0371e82b682f865fda66bd5d070f2d874b5bc52ebc3f5625d296aeb108bfd"} Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.981076 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" event={"ID":"35bd79d9-3f11-4ed4-85b6-39711c51f58d","Type":"ContainerStarted","Data":"ea98b94019394fbea8444a1e7cc0d63c2e962c08a4882eb854bde05fcd06a464"} Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.988595 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1568d57-fbed-428e-9898-c3d5863be0a2-config\") pod \"route-controller-manager-6576b87f9c-jm2zf\" (UID: \"a1568d57-fbed-428e-9898-c3d5863be0a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.989457 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1568d57-fbed-428e-9898-c3d5863be0a2-client-ca\") pod \"route-controller-manager-6576b87f9c-jm2zf\" (UID: \"a1568d57-fbed-428e-9898-c3d5863be0a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" Mar 08 00:24:57 crc 
kubenswrapper[4762]: I0308 00:24:57.991717 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1568d57-fbed-428e-9898-c3d5863be0a2-serving-cert\") pod \"route-controller-manager-6576b87f9c-jm2zf\" (UID: \"a1568d57-fbed-428e-9898-c3d5863be0a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.991729 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9970d4db-1af9-4970-a930-c469cc02bf9f-config\") pod \"kube-apiserver-operator-766d6c64bb-mns85\" (UID: \"9970d4db-1af9-4970-a930-c469cc02bf9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.992250 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a1b71198-134e-4cec-9f0b-b28979adf785-available-featuregates\") pod \"openshift-config-operator-7777fb866f-t95jr\" (UID: \"a1b71198-134e-4cec-9f0b-b28979adf785\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.994688 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-trusted-ca\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.994899 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a713e9e-1823-4b89-a74c-922ed73cdd15-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mft86\" (UID: \"1a713e9e-1823-4b89-a74c-922ed73cdd15\") 
" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86" Mar 08 00:24:57 crc kubenswrapper[4762]: I0308 00:24:57.997072 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ec25fa7b-0753-4de2-8744-386eee28051a-cert\") pod \"ingress-canary-8lcsp\" (UID: \"ec25fa7b-0753-4de2-8744-386eee28051a\") " pod="openshift-ingress-canary/ingress-canary-8lcsp" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.000364 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7a27cd53-cc43-4227-a15a-d55e0bfaf81d-apiservice-cert\") pod \"packageserver-d55dfcdfc-88s4d\" (UID: \"7a27cd53-cc43-4227-a15a-d55e0bfaf81d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.001007 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-ca-trust-extracted\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.002069 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-registry-tls\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.003401 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/04980224-fe82-485b-83f9-9c3d30b196db-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-nh2q6\" (UID: \"04980224-fe82-485b-83f9-9c3d30b196db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.006193 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1b71198-134e-4cec-9f0b-b28979adf785-serving-cert\") pod \"openshift-config-operator-7777fb866f-t95jr\" (UID: \"a1b71198-134e-4cec-9f0b-b28979adf785\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.007713 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9970d4db-1af9-4970-a930-c469cc02bf9f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mns85\" (UID: \"9970d4db-1af9-4970-a930-c469cc02bf9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.008090 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7a27cd53-cc43-4227-a15a-d55e0bfaf81d-tmpfs\") pod \"packageserver-d55dfcdfc-88s4d\" (UID: \"7a27cd53-cc43-4227-a15a-d55e0bfaf81d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.008932 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a713e9e-1823-4b89-a74c-922ed73cdd15-config\") pod \"kube-controller-manager-operator-78b949d7b-mft86\" (UID: \"1a713e9e-1823-4b89-a74c-922ed73cdd15\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.016004 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/68ab86d6-f824-445d-b441-b7cbba73630b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vplbt\" (UID: \"68ab86d6-f824-445d-b441-b7cbba73630b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vplbt" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.017101 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7"] Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.017122 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-registry-certificates\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.017656 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/741e90e6-8de3-4054-94cf-7ada0da0e454-profile-collector-cert\") pod \"olm-operator-6b444d44fb-vwrhn\" (UID: \"741e90e6-8de3-4054-94cf-7ada0da0e454\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.018149 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13-signing-cabundle\") pod \"service-ca-9c57cc56f-9g2xn\" (UID: \"cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13\") " pod="openshift-service-ca/service-ca-9c57cc56f-9g2xn" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.021441 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f160b607-f21e-4909-b6f5-a6756e9c6241-config\") pod \"service-ca-operator-777779d784-49qkg\" (UID: \"f160b607-f21e-4909-b6f5-a6756e9c6241\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49qkg" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.021610 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4de5942e-acf8-4138-acc3-42c177a7f997-srv-cert\") pod \"catalog-operator-68c6474976-dbz7x\" (UID: \"4de5942e-acf8-4138-acc3-42c177a7f997\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.022056 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4de5942e-acf8-4138-acc3-42c177a7f997-profile-collector-cert\") pod \"catalog-operator-68c6474976-dbz7x\" (UID: \"4de5942e-acf8-4138-acc3-42c177a7f997\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.024359 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/741e90e6-8de3-4054-94cf-7ada0da0e454-srv-cert\") pod \"olm-operator-6b444d44fb-vwrhn\" (UID: \"741e90e6-8de3-4054-94cf-7ada0da0e454\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.024785 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-installation-pull-secrets\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.031887 4762 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn"] Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.033009 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gm49w"] Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.034453 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt4pc\" (UniqueName: \"kubernetes.io/projected/04980224-fe82-485b-83f9-9c3d30b196db-kube-api-access-jt4pc\") pod \"package-server-manager-789f6589d5-nh2q6\" (UID: \"04980224-fe82-485b-83f9-9c3d30b196db\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.034842 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13-signing-key\") pod \"service-ca-9c57cc56f-9g2xn\" (UID: \"cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13\") " pod="openshift-service-ca/service-ca-9c57cc56f-9g2xn" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.034875 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4e4e5ef1-02ce-403c-9903-0e9c734f8bdb-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fx8np\" (UID: \"4e4e5ef1-02ce-403c-9903-0e9c734f8bdb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fx8np" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.040933 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7a27cd53-cc43-4227-a15a-d55e0bfaf81d-webhook-cert\") pod \"packageserver-d55dfcdfc-88s4d\" (UID: \"7a27cd53-cc43-4227-a15a-d55e0bfaf81d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.043849 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f160b607-f21e-4909-b6f5-a6756e9c6241-serving-cert\") pod \"service-ca-operator-777779d784-49qkg\" (UID: \"f160b607-f21e-4909-b6f5-a6756e9c6241\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49qkg" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.056195 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" event={"ID":"9d3224d2-e83a-4707-9e42-e13d68451af3","Type":"ContainerStarted","Data":"8e4409c5d05bd5ef69554e617fce93f714a858353544ff940d7a031d6aa03879"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.056240 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" event={"ID":"9d3224d2-e83a-4707-9e42-e13d68451af3","Type":"ContainerStarted","Data":"111a0d31c56ce31a771492d727ccecd2309a5b170ee773e70a7979401e7a4113"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.061059 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jthcx" event={"ID":"46ce5811-52d7-493a-a861-90d666c994ed","Type":"ContainerStarted","Data":"5d2bdb38a619e2b594db82835de5948893e0de929a7cd7aa73f6c394d93354af"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.064584 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z" event={"ID":"bb1d6b44-973f-4add-aadd-0dbbea83af1d","Type":"ContainerStarted","Data":"c633b9bc99ba474b2f6e97f95be9b7da03b2fd3e82ff16d420684ad52a2646d9"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.064649 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z" 
event={"ID":"bb1d6b44-973f-4add-aadd-0dbbea83af1d","Type":"ContainerStarted","Data":"cfe4ba55e32a18b50d92933c7d42e6e0ad8a2a31eef19be6bb49f9d76a6680a1"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.067375 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-842sk" event={"ID":"ca82b7d9-bbba-4543-945b-e78923c1d3cf","Type":"ContainerStarted","Data":"ea57468b23b3a2bcd77f1ca1079d68f46c95a77d17ad82b727e9d4bc6539a1f6"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.067402 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-842sk" event={"ID":"ca82b7d9-bbba-4543-945b-e78923c1d3cf","Type":"ContainerStarted","Data":"40bf20bb4af8dbc20d6eac0e738b15cc87bafd6500471ed6914809cce7c56549"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.072513 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjb9r\" (UniqueName: \"kubernetes.io/projected/4e4e5ef1-02ce-403c-9903-0e9c734f8bdb-kube-api-access-vjb9r\") pod \"multus-admission-controller-857f4d67dd-fx8np\" (UID: \"4e4e5ef1-02ce-403c-9903-0e9c734f8bdb\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fx8np" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.078140 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kn22k" event={"ID":"47ea3169-322b-4246-9a87-515ba6b49133","Type":"ContainerStarted","Data":"411c4bc14d78547d8a67b2a3a1c20ecb0ddd4dab8f018d4bd934950a54dcb1de"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.080115 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/47863e3b-949c-40f1-bdb3-2d940b78cda0-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-ghr8q\" (UID: \"47863e3b-949c-40f1-bdb3-2d940b78cda0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" Mar 08 00:24:58 crc kubenswrapper[4762]: 
I0308 00:24:58.080150 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/be598188-2db1-4d79-9d80-cc03d00fef50-node-bootstrap-token\") pod \"machine-config-server-xq6ct\" (UID: \"be598188-2db1-4d79-9d80-cc03d00fef50\") " pod="openshift-machine-config-operator/machine-config-server-xq6ct" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.080174 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-mountpoint-dir\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.080215 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a34caab-79f1-437e-be12-817faf4e8917-metrics-tls\") pod \"dns-default-77vgj\" (UID: \"0a34caab-79f1-437e-be12-817faf4e8917\") " pod="openshift-dns/dns-default-77vgj" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.080301 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/47863e3b-949c-40f1-bdb3-2d940b78cda0-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-ghr8q\" (UID: \"47863e3b-949c-40f1-bdb3-2d940b78cda0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.080345 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmr5j\" (UniqueName: \"kubernetes.io/projected/0a34caab-79f1-437e-be12-817faf4e8917-kube-api-access-pmr5j\") pod \"dns-default-77vgj\" (UID: \"0a34caab-79f1-437e-be12-817faf4e8917\") " pod="openshift-dns/dns-default-77vgj" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.080401 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4r65\" (UniqueName: \"kubernetes.io/projected/be598188-2db1-4d79-9d80-cc03d00fef50-kube-api-access-s4r65\") pod \"machine-config-server-xq6ct\" (UID: \"be598188-2db1-4d79-9d80-cc03d00fef50\") " pod="openshift-machine-config-operator/machine-config-server-xq6ct" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.080422 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfhmz\" (UniqueName: \"kubernetes.io/projected/47863e3b-949c-40f1-bdb3-2d940b78cda0-kube-api-access-pfhmz\") pod \"cni-sysctl-allowlist-ds-ghr8q\" (UID: \"47863e3b-949c-40f1-bdb3-2d940b78cda0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.080442 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/47863e3b-949c-40f1-bdb3-2d940b78cda0-ready\") pod \"cni-sysctl-allowlist-ds-ghr8q\" (UID: \"47863e3b-949c-40f1-bdb3-2d940b78cda0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.080456 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-csi-data-dir\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.080496 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-socket-dir\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.080530 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.080560 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-registration-dir\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.080585 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a34caab-79f1-437e-be12-817faf4e8917-config-volume\") pod \"dns-default-77vgj\" (UID: \"0a34caab-79f1-437e-be12-817faf4e8917\") " pod="openshift-dns/dns-default-77vgj" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.080599 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/be598188-2db1-4d79-9d80-cc03d00fef50-certs\") pod \"machine-config-server-xq6ct\" (UID: \"be598188-2db1-4d79-9d80-cc03d00fef50\") " pod="openshift-machine-config-operator/machine-config-server-xq6ct" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.080622 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5tz2\" (UniqueName: \"kubernetes.io/projected/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-kube-api-access-h5tz2\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 
00:24:58.080639 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-plugins-dir\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.080982 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-plugins-dir\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.085250 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/47863e3b-949c-40f1-bdb3-2d940b78cda0-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-ghr8q\" (UID: \"47863e3b-949c-40f1-bdb3-2d940b78cda0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.085584 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/47863e3b-949c-40f1-bdb3-2d940b78cda0-ready\") pod \"cni-sysctl-allowlist-ds-ghr8q\" (UID: \"47863e3b-949c-40f1-bdb3-2d940b78cda0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.085653 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-csi-data-dir\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.085859 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-socket-dir\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:58 crc kubenswrapper[4762]: E0308 00:24:58.086538 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:24:58.586523473 +0000 UTC m=+120.060667817 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.086909 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-mountpoint-dir\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.087162 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-registration-dir\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.088207 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/47863e3b-949c-40f1-bdb3-2d940b78cda0-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-ghr8q\" (UID: \"47863e3b-949c-40f1-bdb3-2d940b78cda0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.088610 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a34caab-79f1-437e-be12-817faf4e8917-config-volume\") pod \"dns-default-77vgj\" (UID: \"0a34caab-79f1-437e-be12-817faf4e8917\") " pod="openshift-dns/dns-default-77vgj" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.089454 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcn7n\" (UniqueName: \"kubernetes.io/projected/68ab86d6-f824-445d-b441-b7cbba73630b-kube-api-access-rcn7n\") pod \"control-plane-machine-set-operator-78cbb6b69f-vplbt\" (UID: \"68ab86d6-f824-445d-b441-b7cbba73630b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vplbt" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.093735 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2"] Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.093863 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/be598188-2db1-4d79-9d80-cc03d00fef50-node-bootstrap-token\") pod \"machine-config-server-xq6ct\" (UID: \"be598188-2db1-4d79-9d80-cc03d00fef50\") " pod="openshift-machine-config-operator/machine-config-server-xq6ct" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.099477 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdntt\" (UniqueName: \"kubernetes.io/projected/741e90e6-8de3-4054-94cf-7ada0da0e454-kube-api-access-jdntt\") pod \"olm-operator-6b444d44fb-vwrhn\" (UID: 
\"741e90e6-8de3-4054-94cf-7ada0da0e454\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.100359 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/be598188-2db1-4d79-9d80-cc03d00fef50-certs\") pod \"machine-config-server-xq6ct\" (UID: \"be598188-2db1-4d79-9d80-cc03d00fef50\") " pod="openshift-machine-config-operator/machine-config-server-xq6ct" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.102964 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0a34caab-79f1-437e-be12-817faf4e8917-metrics-tls\") pod \"dns-default-77vgj\" (UID: \"0a34caab-79f1-437e-be12-817faf4e8917\") " pod="openshift-dns/dns-default-77vgj" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.104119 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdvkh\" (UniqueName: \"kubernetes.io/projected/f160b607-f21e-4909-b6f5-a6756e9c6241-kube-api-access-kdvkh\") pod \"service-ca-operator-777779d784-49qkg\" (UID: \"f160b607-f21e-4909-b6f5-a6756e9c6241\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-49qkg" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.106063 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" event={"ID":"1d1f4801-1613-4369-8d06-d0345df9703a","Type":"ContainerStarted","Data":"73475feaa0b6745c7731b9aa81b2a818c1e9e6b97ac5ef799e30b7d2be889051"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.106099 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" event={"ID":"1d1f4801-1613-4369-8d06-d0345df9703a","Type":"ContainerStarted","Data":"643590ddda79e87c2dcfda3f15ea58940ddf7d937d4a63bdb0493273641fcdc8"} Mar 08 00:24:58 crc 
kubenswrapper[4762]: I0308 00:24:58.106745 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.120209 4762 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-vw2gn container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.120300 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" podUID="1d1f4801-1613-4369-8d06-d0345df9703a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.129032 4762 generic.go:334] "Generic (PLEG): container finished" podID="460afccf-5d2c-44d9-813e-41c06be89ab7" containerID="0690c6b1b0789853cce68a2c19d8d2c426a5dc3f56551a7baede002ec26188b2" exitCode=0 Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.129381 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" event={"ID":"460afccf-5d2c-44d9-813e-41c06be89ab7","Type":"ContainerDied","Data":"0690c6b1b0789853cce68a2c19d8d2c426a5dc3f56551a7baede002ec26188b2"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.129412 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" event={"ID":"460afccf-5d2c-44d9-813e-41c06be89ab7","Type":"ContainerStarted","Data":"92e38356faabcf6eaef2f075c1821517ec3916cf763e722bf09885fd52982613"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.134702 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1a713e9e-1823-4b89-a74c-922ed73cdd15-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mft86\" (UID: \"1a713e9e-1823-4b89-a74c-922ed73cdd15\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.136091 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29548800-x52sd"] Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.141894 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-84dbj" event={"ID":"6e8e8070-7d3f-4a58-b1ce-6f240bb0170d","Type":"ContainerStarted","Data":"89195bfcdaac8b8b5ad1df1c8fdb99747829a38dd538d179e0fa6390b90dfa72"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.141950 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-84dbj" event={"ID":"6e8e8070-7d3f-4a58-b1ce-6f240bb0170d","Type":"ContainerStarted","Data":"4a98ce49866880fa021ab89001a094226d1b78302b4f8349d132f5cc1846238b"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.142352 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-84dbj" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.144321 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" event={"ID":"1d484943-583d-493a-ab04-bf99847ff4c4","Type":"ContainerStarted","Data":"1eeb7a66256d8f33ea3c1dcc9616f543e7716c382ff61df3a4c68bded260c3ce"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.144376 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.144388 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-58897d9998-tw6wd" event={"ID":"1d484943-583d-493a-ab04-bf99847ff4c4","Type":"ContainerStarted","Data":"32da05fe78cfae7feaa8976ee734ff7ccd7baaf28ceb13047b7ada9e7211ea49"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.148961 4762 patch_prober.go:28] interesting pod/console-operator-58897d9998-tw6wd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.149001 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" podUID="1d484943-583d-493a-ab04-bf99847ff4c4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.149493 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2mjr\" (UniqueName: \"kubernetes.io/projected/af536ee8-823b-4496-b7b5-b9dee6b9c957-kube-api-access-t2mjr\") pod \"migrator-59844c95c7-wd6hs\" (UID: \"af536ee8-823b-4496-b7b5-b9dee6b9c957\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wd6hs" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.149531 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49qkg" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.151942 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.151980 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.154389 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" event={"ID":"f42edfa8-610d-4cdf-a0db-63d3ccad4615","Type":"ContainerStarted","Data":"c15bcb5524417f90a9d3526c09648238bebd69e5096990bd6b7d8b4349051d1f"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.154431 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" event={"ID":"f42edfa8-610d-4cdf-a0db-63d3ccad4615","Type":"ContainerStarted","Data":"c053c6ed829f74f8cadf989bea0a1a81982c253098289125d09cf278b88814d4"} Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.161385 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk"] Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.166661 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx8np" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.182286 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:58 crc kubenswrapper[4762]: E0308 00:24:58.183673 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:24:58.6836532 +0000 UTC m=+120.157797544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.185684 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29xzn\" (UniqueName: \"kubernetes.io/projected/ec25fa7b-0753-4de2-8744-386eee28051a-kube-api-access-29xzn\") pod \"ingress-canary-8lcsp\" (UID: \"ec25fa7b-0753-4de2-8744-386eee28051a\") " pod="openshift-ingress-canary/ingress-canary-8lcsp" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.189221 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwzwz\" (UniqueName: \"kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-kube-api-access-fwzwz\") pod 
\"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.193127 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.204185 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8lcsp" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.219432 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-746kw\" (UniqueName: \"kubernetes.io/projected/7a27cd53-cc43-4227-a15a-d55e0bfaf81d-kube-api-access-746kw\") pod \"packageserver-d55dfcdfc-88s4d\" (UID: \"7a27cd53-cc43-4227-a15a-d55e0bfaf81d\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.242107 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65rz7\" (UniqueName: \"kubernetes.io/projected/4de5942e-acf8-4138-acc3-42c177a7f997-kube-api-access-65rz7\") pod \"catalog-operator-68c6474976-dbz7x\" (UID: \"4de5942e-acf8-4138-acc3-42c177a7f997\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.270288 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g2wd\" (UniqueName: \"kubernetes.io/projected/cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13-kube-api-access-7g2wd\") pod \"service-ca-9c57cc56f-9g2xn\" (UID: \"cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13\") " pod="openshift-service-ca/service-ca-9c57cc56f-9g2xn" Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.286681 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p"
Mar 08 00:24:58 crc kubenswrapper[4762]: E0308 00:24:58.286996 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:24:58.786984733 +0000 UTC m=+120.261129077 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.289783 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9970d4db-1af9-4970-a930-c469cc02bf9f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mns85\" (UID: \"9970d4db-1af9-4970-a930-c469cc02bf9f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.302033 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-bound-sa-token\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.313943 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78fn5\" (UniqueName: \"kubernetes.io/projected/a1568d57-fbed-428e-9898-c3d5863be0a2-kube-api-access-78fn5\") pod \"route-controller-manager-6576b87f9c-jm2zf\" (UID: \"a1568d57-fbed-428e-9898-c3d5863be0a2\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.315310 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm"]
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.328592 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vplbt"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.335598 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.337170 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njxsf\" (UniqueName: \"kubernetes.io/projected/a1b71198-134e-4cec-9f0b-b28979adf785-kube-api-access-njxsf\") pod \"openshift-config-operator-7777fb866f-t95jr\" (UID: \"a1b71198-134e-4cec-9f0b-b28979adf785\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.343921 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.349644 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wd6hs"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.359310 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4r65\" (UniqueName: \"kubernetes.io/projected/be598188-2db1-4d79-9d80-cc03d00fef50-kube-api-access-s4r65\") pod \"machine-config-server-xq6ct\" (UID: \"be598188-2db1-4d79-9d80-cc03d00fef50\") " pod="openshift-machine-config-operator/machine-config-server-xq6ct"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.359488 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.371460 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.377750 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfhmz\" (UniqueName: \"kubernetes.io/projected/47863e3b-949c-40f1-bdb3-2d940b78cda0-kube-api-access-pfhmz\") pod \"cni-sysctl-allowlist-ds-ghr8q\" (UID: \"47863e3b-949c-40f1-bdb3-2d940b78cda0\") " pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.388083 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 08 00:24:58 crc kubenswrapper[4762]: E0308 00:24:58.388738 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:24:58.888694075 +0000 UTC m=+120.362838409 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.392010 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf"]
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.395995 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr"]
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.398035 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5tz2\" (UniqueName: \"kubernetes.io/projected/b2dce5bf-2a64-44af-bfe2-0a15fd5d357d-kube-api-access-h5tz2\") pod \"csi-hostpathplugin-k687p\" (UID: \"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d\") " pod="hostpath-provisioner/csi-hostpathplugin-k687p"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.403094 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.413636 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmr5j\" (UniqueName: \"kubernetes.io/projected/0a34caab-79f1-437e-be12-817faf4e8917-kube-api-access-pmr5j\") pod \"dns-default-77vgj\" (UID: \"0a34caab-79f1-437e-be12-817faf4e8917\") " pod="openshift-dns/dns-default-77vgj"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.429400 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-9g2xn"
Mar 08 00:24:58 crc kubenswrapper[4762]: W0308 00:24:58.443654 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod681404ff_89eb_420d_b1e2_6769d4b51636.slice/crio-d50d43658c612dfc77675758a305e04657563b82e5fbb4c8232544d18070c063 WatchSource:0}: Error finding container d50d43658c612dfc77675758a305e04657563b82e5fbb4c8232544d18070c063: Status 404 returned error can't find the container with id d50d43658c612dfc77675758a305e04657563b82e5fbb4c8232544d18070c063
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.460892 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.476434 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.493665 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k"]
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.494689 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p"
Mar 08 00:24:58 crc kubenswrapper[4762]: E0308 00:24:58.495156 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:24:58.995144227 +0000 UTC m=+120.469288571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.518459 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-k687p"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.534657 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-77vgj"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.541912 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-xq6ct"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.547903 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q"
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.595938 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 08 00:24:58 crc kubenswrapper[4762]: E0308 00:24:58.596112 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:24:59.096082585 +0000 UTC m=+120.570226929 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.596629 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p"
Mar 08 00:24:58 crc kubenswrapper[4762]: E0308 00:24:58.597711 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:24:59.097697586 +0000 UTC m=+120.571841930 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.679426 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fx8np"]
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.697280 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 08 00:24:58 crc kubenswrapper[4762]: E0308 00:24:58.697434 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:24:59.197402584 +0000 UTC m=+120.671546938 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.697671 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p"
Mar 08 00:24:58 crc kubenswrapper[4762]: E0308 00:24:58.698137 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:24:59.198127207 +0000 UTC m=+120.672271551 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.732175 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-49qkg"]
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.798873 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 08 00:24:58 crc kubenswrapper[4762]: E0308 00:24:58.799169 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:24:59.299151317 +0000 UTC m=+120.773295661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.807510 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8lcsp"]
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.899842 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p"
Mar 08 00:24:58 crc kubenswrapper[4762]: E0308 00:24:58.900501 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:24:59.400489438 +0000 UTC m=+120.874633782 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 08 00:24:58 crc kubenswrapper[4762]: I0308 00:24:58.913878 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=30.913858741 podStartE2EDuration="30.913858741s" podCreationTimestamp="2026-03-08 00:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:24:58.877877221 +0000 UTC m=+120.352021565" watchObservedRunningTime="2026-03-08 00:24:58.913858741 +0000 UTC m=+120.388003085"
Mar 08 00:24:58 crc kubenswrapper[4762]: W0308 00:24:58.935722 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e4e5ef1_02ce_403c_9903_0e9c734f8bdb.slice/crio-ee2e619c2c377da8ec9564948edba72111e0ac639c20de9c85ecc185005410b5 WatchSource:0}: Error finding container ee2e619c2c377da8ec9564948edba72111e0ac639c20de9c85ecc185005410b5: Status 404 returned error can't find the container with id ee2e619c2c377da8ec9564948edba72111e0ac639c20de9c85ecc185005410b5
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.001259 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 08 00:24:59 crc kubenswrapper[4762]: E0308 00:24:59.001381 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:24:59.501360493 +0000 UTC m=+120.975504847 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.002663 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p"
Mar 08 00:24:59 crc kubenswrapper[4762]: E0308 00:24:59.002978 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:24:59.502965744 +0000 UTC m=+120.977110088 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.103296 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 08 00:24:59 crc kubenswrapper[4762]: E0308 00:24:59.103653 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:24:59.603636633 +0000 UTC m=+121.077780977 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.116387 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6"]
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.184302 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-t95jr"]
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.197939 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2" event={"ID":"6d729469-fc86-4275-9f27-df601b6a1700","Type":"ContainerStarted","Data":"6c7af87e3c69e5b9fd89ca6f98070eda166157b182f2ee5ce2145d9f2a45dc50"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.197990 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2" event={"ID":"6d729469-fc86-4275-9f27-df601b6a1700","Type":"ContainerStarted","Data":"a5176ad6916a9a9d373c2a98cf1b0e20ab9594567c4c54692eaf437e30d43f07"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.222100 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" event={"ID":"62e4d886-779c-4931-87f7-370090b02132","Type":"ContainerStarted","Data":"b7fb236e6c44d73ccb8718946e751a7f3d78cadfab89932ae5f93ad61ed6f8a4"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.222872 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl"
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.228337 4762 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mg6jl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.228404 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" podUID="62e4d886-779c-4931-87f7-370090b02132" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.17:8080/healthz\": dial tcp 10.217.0.17:8080: connect: connection refused"
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.233312 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p"
Mar 08 00:24:59 crc kubenswrapper[4762]: E0308 00:24:59.233850 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:24:59.733834858 +0000 UTC m=+121.207979202 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.257375 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" event={"ID":"cf5ac2df-231b-4019-a6ad-a9485ee8802e","Type":"ContainerStarted","Data":"b67bb9155fbb001682129da5a6f1ceed81a1b9830563a4766887c717d2c39532"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.257439 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" event={"ID":"cf5ac2df-231b-4019-a6ad-a9485ee8802e","Type":"ContainerStarted","Data":"22208a5ce329fbcf2e1ee1e7f07aa51a1194d1330087ab7367e8bffccf94f9fe"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.258765 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg"
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.265427 4762 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5pmsg container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" start-of-body=
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.265490 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" podUID="cf5ac2df-231b-4019-a6ad-a9485ee8802e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused"
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.293641 4762 ???:1] "http: TLS handshake error from 192.168.126.11:42860: no serving certificate available for the kubelet"
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.313454 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29548800-x52sd" event={"ID":"fe7222be-b489-4bcb-bc44-0a8933cde1c5","Type":"ContainerStarted","Data":"8151e2042ac21e0af491f452ea8917140a9c046741c15e70a9a133078ca8ea76"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.313499 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" event={"ID":"460afccf-5d2c-44d9-813e-41c06be89ab7","Type":"ContainerStarted","Data":"d0e382b47fe7a19acde921c65f743cf937f4508e0d8f4d7c8d14492ab22691b9"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.323203 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nf2fd" event={"ID":"18f4f24d-5e64-4cbb-b6f2-59b836b022e0","Type":"ContainerStarted","Data":"44ec29988d7ca2ef6ca9d18ae0a6f9db9ea92db407d254828da18e1d9e664c05"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.323256 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nf2fd" event={"ID":"18f4f24d-5e64-4cbb-b6f2-59b836b022e0","Type":"ContainerStarted","Data":"2dc588451e02e51d16115685c50ec3ed40fbf8b6a53a40ae54b6dc1a1c7095ec"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.334555 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 08 00:24:59 crc kubenswrapper[4762]: E0308 00:24:59.335707 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:24:59.835689414 +0000 UTC m=+121.309833758 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.390604 4762 ???:1] "http: TLS handshake error from 192.168.126.11:42872: no serving certificate available for the kubelet"
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.447036 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p"
Mar 08 00:24:59 crc kubenswrapper[4762]: E0308 00:24:59.447337 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:24:59.947321281 +0000 UTC m=+121.421465615 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.465586 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" event={"ID":"041990e2-6203-42ce-b6fd-6882e21fc2a7","Type":"ContainerStarted","Data":"99ebbe30c23ed922888c30b4d49b6fe1b157259d3e2d7be50875bb883389724c"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.465651 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" event={"ID":"041990e2-6203-42ce-b6fd-6882e21fc2a7","Type":"ContainerStarted","Data":"d82cb31288f38a54a7039b99499360a97b830ab19d4da006a0a995c7d89a6e4c"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.490344 4762 ???:1] "http: TLS handshake error from 192.168.126.11:42886: no serving certificate available for the kubelet"
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.490423 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d"]
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.522171 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49qkg" event={"ID":"f160b607-f21e-4909-b6f5-a6756e9c6241","Type":"ContainerStarted","Data":"4789c4902de18ba72d5a15b597ce90d7bf7085943c9e60152c95b014035efbff"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.536703 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8lcsp" event={"ID":"ec25fa7b-0753-4de2-8744-386eee28051a","Type":"ContainerStarted","Data":"d25cbb4954e4d6485f29246a1eb3442a3ed363980830e2f8d225c2aaf2538194"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.548856 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm" event={"ID":"4e0bf995-5e45-4c39-8fbd-068691ed47bb","Type":"ContainerStarted","Data":"429d803de63e8e9281d010c6d693d82779ebed2a95e8da384d00bfa04dbf0bc8"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.562169 4762 generic.go:334] "Generic (PLEG): container finished" podID="46ce5811-52d7-493a-a861-90d666c994ed" containerID="ff84f95c4d1ab1318ee2c63023cdd88c9ed75de041a8809090eeaa54fc415ab4" exitCode=0
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.562231 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jthcx" event={"ID":"46ce5811-52d7-493a-a861-90d666c994ed","Type":"ContainerDied","Data":"ff84f95c4d1ab1318ee2c63023cdd88c9ed75de041a8809090eeaa54fc415ab4"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.576516 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 08 00:24:59 crc kubenswrapper[4762]: E0308 00:24:59.577093 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:00.07707604 +0000 UTC m=+121.551220384 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.641538 4762 ???:1] "http: TLS handshake error from 192.168.126.11:42896: no serving certificate available for the kubelet"
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.646512 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" event={"ID":"f5cbd39f-952e-4664-8994-4b2dd4162b25","Type":"ContainerStarted","Data":"3bfaffa9649f8a87fe29853e5163bb188079985e2e6bcfd3c5e864401d89887a"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.646545 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" event={"ID":"f5cbd39f-952e-4664-8994-4b2dd4162b25","Type":"ContainerStarted","Data":"1beb3595544dbcaea907fc0ff9f59770783cde4995371a0373a29d3e7152dc63"}
Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.681478 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p"
Mar 08 00:24:59 crc kubenswrapper[4762]: E0308 00:24:59.685072 4762 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:00.185053822 +0000 UTC m=+121.659198166 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.706718 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" event={"ID":"281aeeb6-c1e4-4189-ae69-1b28741649d4","Type":"ContainerStarted","Data":"9ec63a156a9959c7673adb290d3275b617409ef7bdaa7a7cf64afae794f43a49"} Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.706757 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" event={"ID":"281aeeb6-c1e4-4189-ae69-1b28741649d4","Type":"ContainerStarted","Data":"a6032375124314f0b889fc62f1d6afe772495a83af0729efc1be63719112ce06"} Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.734606 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-9g2xn"] Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.735453 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-k687p"] Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.743737 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kn22k" 
event={"ID":"47ea3169-322b-4246-9a87-515ba6b49133","Type":"ContainerStarted","Data":"20dc728d30786c61357663c07671e3894813f2935a0a4dc5797eb9ba02b16e98"} Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.748980 4762 ???:1] "http: TLS handshake error from 192.168.126.11:42898: no serving certificate available for the kubelet" Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.788027 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:24:59 crc kubenswrapper[4762]: E0308 00:24:59.788361 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:00.288346354 +0000 UTC m=+121.762490708 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.820404 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x"] Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.841510 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx8np" event={"ID":"4e4e5ef1-02ce-403c-9903-0e9c734f8bdb","Type":"ContainerStarted","Data":"ee2e619c2c377da8ec9564948edba72111e0ac639c20de9c85ecc185005410b5"} Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.865068 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" event={"ID":"f42edfa8-610d-4cdf-a0db-63d3ccad4615","Type":"ContainerStarted","Data":"77b6e0ae96497b0fa1080130daf648a59db93589a0c7cb67cba4ec1e629c5e2f"} Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.892968 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:24:59 crc kubenswrapper[4762]: E0308 00:24:59.894658 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-03-08 00:25:00.394641851 +0000 UTC m=+121.868786195 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.946680 4762 ???:1] "http: TLS handshake error from 192.168.126.11:42912: no serving certificate available for the kubelet" Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.947185 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" event={"ID":"47863e3b-949c-40f1-bdb3-2d940b78cda0","Type":"ContainerStarted","Data":"d96f1a91aac0066b56b4679f0b13c57e3d6ea049bb6e828c5514cac5916b6571"} Mar 08 00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.966406 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf" event={"ID":"a49b045c-2159-4fd1-b0de-fcf1453e6adb","Type":"ContainerStarted","Data":"23383fcce5007db2f6537625de50ccf122fc0602262d7f0e618eda5ff420622f"} Mar 08 00:24:59 crc kubenswrapper[4762]: W0308 00:24:59.978262 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4de5942e_acf8_4138_acc3_42c177a7f997.slice/crio-0783494b92d381ed19cb99cb973f9fb9739dc73207cbee44a7de35df863e8a06 WatchSource:0}: Error finding container 0783494b92d381ed19cb99cb973f9fb9739dc73207cbee44a7de35df863e8a06: Status 404 returned error can't find the container with id 0783494b92d381ed19cb99cb973f9fb9739dc73207cbee44a7de35df863e8a06 Mar 08 
00:24:59 crc kubenswrapper[4762]: I0308 00:24:59.999130 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" event={"ID":"681404ff-89eb-420d-b1e2-6769d4b51636","Type":"ContainerStarted","Data":"d50d43658c612dfc77675758a305e04657563b82e5fbb4c8232544d18070c063"} Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.008998 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:00 crc kubenswrapper[4762]: E0308 00:25:00.009980 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:00.509954434 +0000 UTC m=+121.984098778 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:00 crc kubenswrapper[4762]: E0308 00:25:00.030564 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1b71198_134e_4cec_9f0b_b28979adf785.slice/crio-fb874f499a6b285303ded18c794e84df0779fb0aba9ec0240e9a4826f30b76b7\": RecentStats: unable to find data in memory cache]" Mar 08 00:25:00 crc kubenswrapper[4762]: W0308 00:25:00.052169 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb0f35af_fe8c_4f8f_8674_f8f60f6ffa13.slice/crio-4db697430a456d0d34707ba410eceb66ea86b9dde2dbefa53032666588767574 WatchSource:0}: Error finding container 4db697430a456d0d34707ba410eceb66ea86b9dde2dbefa53032666588767574: Status 404 returned error can't find the container with id 4db697430a456d0d34707ba410eceb66ea86b9dde2dbefa53032666588767574 Mar 08 00:25:00 crc kubenswrapper[4762]: W0308 00:25:00.076098 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2dce5bf_2a64_44af_bfe2_0a15fd5d357d.slice/crio-96fef50b1ad4b27bf5aa59b9b9285f78ae9c8591c999097e0aad8b8560455534 WatchSource:0}: Error finding container 96fef50b1ad4b27bf5aa59b9b9285f78ae9c8591c999097e0aad8b8560455534: Status 404 returned error can't find the container with id 96fef50b1ad4b27bf5aa59b9b9285f78ae9c8591c999097e0aad8b8560455534 Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.092216 
4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k" event={"ID":"f9373f32-b3da-4942-83dd-490eb4d631fa","Type":"ContainerStarted","Data":"85b0846b5142fd74a3d8a9d46c08d749b3a6710fe47e95bc76a626ca21c47db4"} Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.114067 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:00 crc kubenswrapper[4762]: E0308 00:25:00.115817 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:00.615802847 +0000 UTC m=+122.089947191 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.142460 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zdv55" event={"ID":"bd304649-1e47-4aca-ac7f-31ce823babbb","Type":"ContainerStarted","Data":"acaa3f9330cb8a0c334edb5f7d419c81bf5c69bcc1a9c289457f56431605d861"} Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.142492 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zdv55" event={"ID":"bd304649-1e47-4aca-ac7f-31ce823babbb","Type":"ContainerStarted","Data":"4a58cd1c17831392a29231e60f395242b8610e46751427630940537b5bc9752d"} Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.182657 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-77vgj"] Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.197026 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" event={"ID":"da8e6450-ebd3-47b0-9153-1deebe16432f","Type":"ContainerStarted","Data":"6f839cb572703cdbee44fd3617bf18216bd54984d90735b3d51a9bbcab1defc1"} Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.198893 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 08 
00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.209044 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.208523 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-q44mn" podStartSLOduration=60.208508134 podStartE2EDuration="1m0.208508134s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:00.207561383 +0000 UTC m=+121.681705727" watchObservedRunningTime="2026-03-08 00:25:00.208508134 +0000 UTC m=+121.682652478" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.217607 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:00 crc kubenswrapper[4762]: E0308 00:25:00.218787 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:00.718753418 +0000 UTC m=+122.192897762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.233838 4762 ???:1] "http: TLS handshake error from 192.168.126.11:42916: no serving certificate available for the kubelet" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.237133 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf"] Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.259798 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-84dbj" podStartSLOduration=60.259779847 podStartE2EDuration="1m0.259779847s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:00.258131435 +0000 UTC m=+121.732275779" watchObservedRunningTime="2026-03-08 00:25:00.259779847 +0000 UTC m=+121.733924191" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.297748 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.304639 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-wd6hs"] Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.325565 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:00 crc kubenswrapper[4762]: E0308 00:25:00.325908 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:00.825896102 +0000 UTC m=+122.300040446 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.338099 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vplbt"] Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.396880 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn"] Mar 08 00:25:00 crc kubenswrapper[4762]: W0308 00:25:00.406656 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaf536ee8_823b_4496_b7b5_b9dee6b9c957.slice/crio-bf485cc3ec591f9cf119d02111a13b97e844ee1cb6c0edb4919e3b0bb69b8dcb WatchSource:0}: Error finding container bf485cc3ec591f9cf119d02111a13b97e844ee1cb6c0edb4919e3b0bb69b8dcb: Status 404 returned error can't find the container with id 
bf485cc3ec591f9cf119d02111a13b97e844ee1cb6c0edb4919e3b0bb69b8dcb Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.411723 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-kn22k" podStartSLOduration=59.41169507 podStartE2EDuration="59.41169507s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:00.389559459 +0000 UTC m=+121.863703803" watchObservedRunningTime="2026-03-08 00:25:00.41169507 +0000 UTC m=+121.885839404" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.426748 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85"] Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.427216 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:00 crc kubenswrapper[4762]: E0308 00:25:00.427454 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:00.927439138 +0000 UTC m=+122.401583482 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.432290 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" podStartSLOduration=59.432275712 podStartE2EDuration="59.432275712s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:00.429503874 +0000 UTC m=+121.903648218" watchObservedRunningTime="2026-03-08 00:25:00.432275712 +0000 UTC m=+121.906420056" Mar 08 00:25:00 crc kubenswrapper[4762]: W0308 00:25:00.484454 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod741e90e6_8de3_4054_94cf_7ada0da0e454.slice/crio-f5d4fbe82d38f25173bb879b5f7e715cd7b31a3847d2090d702cb2ed3e75eb37 WatchSource:0}: Error finding container f5d4fbe82d38f25173bb879b5f7e715cd7b31a3847d2090d702cb2ed3e75eb37: Status 404 returned error can't find the container with id f5d4fbe82d38f25173bb879b5f7e715cd7b31a3847d2090d702cb2ed3e75eb37 Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.498413 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-wltst" podStartSLOduration=60.498390016 podStartE2EDuration="1m0.498390016s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:00.467744755 +0000 UTC m=+121.941889099" watchObservedRunningTime="2026-03-08 00:25:00.498390016 +0000 UTC m=+121.972534360" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.510262 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-frbvk" podStartSLOduration=59.510240162 podStartE2EDuration="59.510240162s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:00.501607708 +0000 UTC m=+121.975752052" watchObservedRunningTime="2026-03-08 00:25:00.510240162 +0000 UTC m=+121.984384506" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.512984 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86"] Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.532726 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:00 crc kubenswrapper[4762]: E0308 00:25:00.533088 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:01.033075485 +0000 UTC m=+122.507219829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.547626 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-gm49w" podStartSLOduration=59.547587965 podStartE2EDuration="59.547587965s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:00.545134227 +0000 UTC m=+122.019278571" watchObservedRunningTime="2026-03-08 00:25:00.547587965 +0000 UTC m=+122.021732309" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.598069 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-842sk" podStartSLOduration=60.598044344 podStartE2EDuration="1m0.598044344s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:00.589189263 +0000 UTC m=+122.063333607" watchObservedRunningTime="2026-03-08 00:25:00.598044344 +0000 UTC m=+122.072188678" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.640607 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" podStartSLOduration=59.64057669 podStartE2EDuration="59.64057669s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:00.640391684 +0000 UTC m=+122.114536028" watchObservedRunningTime="2026-03-08 00:25:00.64057669 +0000 UTC m=+122.114721034" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.641111 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:00 crc kubenswrapper[4762]: E0308 00:25:00.643453 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:01.14340519 +0000 UTC m=+122.617549534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.657151 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:00 crc kubenswrapper[4762]: E0308 00:25:00.657745 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:01.157729414 +0000 UTC m=+122.631873758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.665838 4762 ???:1] "http: TLS handshake error from 192.168.126.11:42930: no serving certificate available for the kubelet" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.719842 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.727441 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" podStartSLOduration=59.727421622 podStartE2EDuration="59.727421622s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:00.726213744 +0000 UTC m=+122.200358078" watchObservedRunningTime="2026-03-08 00:25:00.727421622 +0000 UTC m=+122.201565966" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.727821 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hv69z" podStartSLOduration=59.727816054 podStartE2EDuration="59.727816054s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:00.690938616 +0000 UTC m=+122.165082960" watchObservedRunningTime="2026-03-08 
00:25:00.727816054 +0000 UTC m=+122.201960398" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.745061 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:00 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:00 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:00 crc kubenswrapper[4762]: healthz check failed Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.745119 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.758217 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:00 crc kubenswrapper[4762]: E0308 00:25:00.758316 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:01.2582955 +0000 UTC m=+122.732439844 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.758482 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:00 crc kubenswrapper[4762]: E0308 00:25:00.758880 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:01.258868408 +0000 UTC m=+122.733012752 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.764323 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-c78f7" podStartSLOduration=59.76430839 podStartE2EDuration="59.76430839s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:00.763958709 +0000 UTC m=+122.238103053" watchObservedRunningTime="2026-03-08 00:25:00.76430839 +0000 UTC m=+122.238452734" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.779685 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.849747 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" podStartSLOduration=60.849731067 podStartE2EDuration="1m0.849731067s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:00.848025952 +0000 UTC m=+122.322170296" watchObservedRunningTime="2026-03-08 00:25:00.849731067 +0000 UTC m=+122.323875411" Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.862679 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:00 crc kubenswrapper[4762]: E0308 00:25:00.862941 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:01.362926215 +0000 UTC m=+122.837070559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:00 crc kubenswrapper[4762]: I0308 00:25:00.966418 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:00 crc kubenswrapper[4762]: E0308 00:25:00.966724 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:01.466711882 +0000 UTC m=+122.940856226 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.070170 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:01 crc kubenswrapper[4762]: E0308 00:25:01.070500 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:01.570472339 +0000 UTC m=+123.044616673 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.070722 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:01 crc kubenswrapper[4762]: E0308 00:25:01.071044 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:01.571035288 +0000 UTC m=+123.045179632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.172342 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:01 crc kubenswrapper[4762]: E0308 00:25:01.172512 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:01.67248387 +0000 UTC m=+123.146628214 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.172897 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:01 crc kubenswrapper[4762]: E0308 00:25:01.173259 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:01.673251755 +0000 UTC m=+123.147396099 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.185809 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" podStartSLOduration=61.185791373 podStartE2EDuration="1m1.185791373s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:01.134369883 +0000 UTC m=+122.608514227" watchObservedRunningTime="2026-03-08 00:25:01.185791373 +0000 UTC m=+122.659935707" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.187727 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" podStartSLOduration=61.187721904 podStartE2EDuration="1m1.187721904s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:01.176223479 +0000 UTC m=+122.650367823" watchObservedRunningTime="2026-03-08 00:25:01.187721904 +0000 UTC m=+122.661866248" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.261244 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8lcsp" event={"ID":"ec25fa7b-0753-4de2-8744-386eee28051a","Type":"ContainerStarted","Data":"e9d434a67a162dcff92ff23b7ec4bc32c5f910ce229f1baacfc64ad647495dff"} Mar 08 
00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.276060 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:01 crc kubenswrapper[4762]: E0308 00:25:01.277348 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:01.777330002 +0000 UTC m=+123.251474356 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.294391 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nf2fd" event={"ID":"18f4f24d-5e64-4cbb-b6f2-59b836b022e0","Type":"ContainerStarted","Data":"303e7b9c3cdfaf14999ad9d8570eb56747e9cad4d1dbdb5ca4bf595053b39a7c"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.302301 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" event={"ID":"681404ff-89eb-420d-b1e2-6769d4b51636","Type":"ContainerStarted","Data":"59c812da05d96c3fd40e2ce82e7659bdf249330efc6613b4b9dbac4ffcd05094"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.320463 4762 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" event={"ID":"04980224-fe82-485b-83f9-9c3d30b196db","Type":"ContainerStarted","Data":"f19feff2310b8297ec17d25b46434d009df9c54426a87ffa807fb19b5ff38722"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.320499 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" event={"ID":"04980224-fe82-485b-83f9-9c3d30b196db","Type":"ContainerStarted","Data":"81f69d888fbfb7116f8adbfb5c1f93dccd7e71a86600b844c5a275ca32a4d9d5"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.338148 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k" event={"ID":"f9373f32-b3da-4942-83dd-490eb4d631fa","Type":"ContainerStarted","Data":"ac3612e8df3c547a9e5ad2b313da93fc6f8cce324553207341db11455368cf89"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.346195 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vplbt" event={"ID":"68ab86d6-f824-445d-b441-b7cbba73630b","Type":"ContainerStarted","Data":"86db17af8e64731b7c912cc36061830c50e3b9bd1de862abb662b00129f9725a"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.364836 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-kdmr2" podStartSLOduration=61.364816924 podStartE2EDuration="1m1.364816924s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:01.364408591 +0000 UTC m=+122.838552935" watchObservedRunningTime="2026-03-08 00:25:01.364816924 +0000 UTC m=+122.838961268" Mar 08 00:25:01 crc kubenswrapper[4762]: 
I0308 00:25:01.367523 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jthcx" event={"ID":"46ce5811-52d7-493a-a861-90d666c994ed","Type":"ContainerStarted","Data":"15192b0aeb562dba96c8d05cf80d377c1b7406181ce6fabf283695c5f1a3edc0"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.379405 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:01 crc kubenswrapper[4762]: E0308 00:25:01.381289 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:01.881272575 +0000 UTC m=+123.355416919 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.415496 4762 ???:1] "http: TLS handshake error from 192.168.126.11:42938: no serving certificate available for the kubelet" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.416194 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" event={"ID":"7a27cd53-cc43-4227-a15a-d55e0bfaf81d","Type":"ContainerStarted","Data":"4ad0e6773b6f050fa791a97c0d45e88d00db2bb3edd49f357d7eb328a375cb52"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.416236 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" event={"ID":"7a27cd53-cc43-4227-a15a-d55e0bfaf81d","Type":"ContainerStarted","Data":"bca72c6379c5f5f0a4467820a42d3e4606fdfcae0dd7a8434f296d1a612fa210"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.417599 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.465002 4762 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-88s4d container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.465559 4762 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" podUID="7a27cd53-cc43-4227-a15a-d55e0bfaf81d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.482555 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:01 crc kubenswrapper[4762]: E0308 00:25:01.483923 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:01.983901456 +0000 UTC m=+123.458045800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.504715 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" podStartSLOduration=61.504679205 podStartE2EDuration="1m1.504679205s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:01.489330728 +0000 UTC m=+122.963475072" watchObservedRunningTime="2026-03-08 00:25:01.504679205 +0000 UTC m=+122.978823549" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.507136 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k687p" event={"ID":"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d","Type":"ContainerStarted","Data":"96fef50b1ad4b27bf5aa59b9b9285f78ae9c8591c999097e0aad8b8560455534"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.549009 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9g2xn" event={"ID":"cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13","Type":"ContainerStarted","Data":"4db697430a456d0d34707ba410eceb66ea86b9dde2dbefa53032666588767574"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.585932 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.589071 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xq6ct" event={"ID":"be598188-2db1-4d79-9d80-cc03d00fef50","Type":"ContainerStarted","Data":"62254414534cd2ae5980a12a54a8d285bbf6cd734a0cc1d1864494fa635b3395"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.589119 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-xq6ct" event={"ID":"be598188-2db1-4d79-9d80-cc03d00fef50","Type":"ContainerStarted","Data":"7146fb8b49aecf709ee66763729c58e01fa1d52748881e2bc4465b286a1ee2c3"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.595690 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8lcsp" podStartSLOduration=7.595676557 podStartE2EDuration="7.595676557s" podCreationTimestamp="2026-03-08 00:24:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:01.594515671 +0000 UTC m=+123.068660015" watchObservedRunningTime="2026-03-08 00:25:01.595676557 +0000 UTC m=+123.069820901" Mar 08 00:25:01 crc kubenswrapper[4762]: E0308 00:25:01.602759 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:02.102734091 +0000 UTC m=+123.576878425 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.611301 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zdv55" event={"ID":"bd304649-1e47-4aca-ac7f-31ce823babbb","Type":"ContainerStarted","Data":"ace973af2bf593393b39a9ca4e79f506d41003e804e20ea2b331eadfd2d3d72a"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.629273 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" event={"ID":"4de5942e-acf8-4138-acc3-42c177a7f997","Type":"ContainerStarted","Data":"0783494b92d381ed19cb99cb973f9fb9739dc73207cbee44a7de35df863e8a06"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.631349 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.647659 4762 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dbz7x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.647710 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" podUID="4de5942e-acf8-4138-acc3-42c177a7f997" 
containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.664130 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-7nw7k" podStartSLOduration=60.664080504 podStartE2EDuration="1m0.664080504s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:01.633826536 +0000 UTC m=+123.107970880" watchObservedRunningTime="2026-03-08 00:25:01.664080504 +0000 UTC m=+123.138224848" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.671042 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm" event={"ID":"4e0bf995-5e45-4c39-8fbd-068691ed47bb","Type":"ContainerStarted","Data":"33ac1a5eb63dc471254f9fba0e502c8aa16ac9c1634b94f2aa0a9afe7d748b04"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.671112 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm" event={"ID":"4e0bf995-5e45-4c39-8fbd-068691ed47bb","Type":"ContainerStarted","Data":"c1be6c3f50f7382018c94b32d08fe474242d2cf8bb3e646f81052b47901c777a"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.677954 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-77vgj" event={"ID":"0a34caab-79f1-437e-be12-817faf4e8917","Type":"ContainerStarted","Data":"91b0d34c59365f1d6473f11ab4c2189ecbc3813c83917e7e9e28b0a25e15cff0"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.682158 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx8np" 
event={"ID":"4e4e5ef1-02ce-403c-9903-0e9c734f8bdb","Type":"ContainerStarted","Data":"73540a3eef66e2a08b1f6c205512c5eda7addb5db2280442077a13b9e33910e1"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.689534 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" event={"ID":"47863e3b-949c-40f1-bdb3-2d940b78cda0","Type":"ContainerStarted","Data":"30846f2355ea0583dd129fc3bc21d56f1644db85f37c7e512531d71b23121d23"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.690049 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.700841 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:01 crc kubenswrapper[4762]: E0308 00:25:01.701693 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:02.201673904 +0000 UTC m=+123.675818238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.713693 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" event={"ID":"a1568d57-fbed-428e-9898-c3d5863be0a2","Type":"ContainerStarted","Data":"23671731e0576eca1ada2d8fd4d064ec06e650cf08d8b4f7b41ae58ee479c7fe"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.714166 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.721860 4762 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-jm2zf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.721912 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" podUID="a1568d57-fbed-428e-9898-c3d5863be0a2" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.723827 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns-operator/dns-operator-744455d44c-nf2fd" podStartSLOduration=60.723815396 podStartE2EDuration="1m0.723815396s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:01.723184777 +0000 UTC m=+123.197329121" watchObservedRunningTime="2026-03-08 00:25:01.723815396 +0000 UTC m=+123.197959740" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.735910 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:01 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:01 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:01 crc kubenswrapper[4762]: healthz check failed Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.735960 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.779946 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wd6hs" event={"ID":"af536ee8-823b-4496-b7b5-b9dee6b9c957","Type":"ContainerStarted","Data":"bf485cc3ec591f9cf119d02111a13b97e844ee1cb6c0edb4919e3b0bb69b8dcb"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.795059 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf" event={"ID":"a49b045c-2159-4fd1-b0de-fcf1453e6adb","Type":"ContainerStarted","Data":"72bad8a735f381ccc49a23bf9be99953f709aaa9942ba5119f6a28b25c2bb85b"} Mar 08 00:25:01 
crc kubenswrapper[4762]: I0308 00:25:01.804999 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:01 crc kubenswrapper[4762]: E0308 00:25:01.807024 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:02.307005972 +0000 UTC m=+123.781150316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.815416 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" podStartSLOduration=6.815396088 podStartE2EDuration="6.815396088s" podCreationTimestamp="2026-03-08 00:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:01.812402083 +0000 UTC m=+123.286546427" watchObservedRunningTime="2026-03-08 00:25:01.815396088 +0000 UTC m=+123.289540422" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.820034 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" podStartSLOduration=60.820021064 podStartE2EDuration="1m0.820021064s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:01.759741465 +0000 UTC m=+123.233885809" watchObservedRunningTime="2026-03-08 00:25:01.820021064 +0000 UTC m=+123.294165408" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.823136 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85" event={"ID":"9970d4db-1af9-4970-a930-c469cc02bf9f","Type":"ContainerStarted","Data":"6553bb78f383d3edbfa64a3229a9ff14dd18ea732c37b0d3ee27daff4ddf2c8c"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.832420 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86" event={"ID":"1a713e9e-1823-4b89-a74c-922ed73cdd15","Type":"ContainerStarted","Data":"4926a7d8035f9ce8459cc3ac601d0181afa9f77cb8565388b3cc3e3c2dab3e9b"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.858394 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" podStartSLOduration=60.858379208 podStartE2EDuration="1m0.858379208s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:01.843434246 +0000 UTC m=+123.317578590" watchObservedRunningTime="2026-03-08 00:25:01.858379208 +0000 UTC m=+123.332523542" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.860813 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vw2gn"] Mar 08 00:25:01 crc 
kubenswrapper[4762]: I0308 00:25:01.876567 4762 generic.go:334] "Generic (PLEG): container finished" podID="a1b71198-134e-4cec-9f0b-b28979adf785" containerID="4a101b772220215fdc20dc7f85814b461775203d0c2a2405446f732c2a4212bd" exitCode=0 Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.876635 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" event={"ID":"a1b71198-134e-4cec-9f0b-b28979adf785","Type":"ContainerDied","Data":"4a101b772220215fdc20dc7f85814b461775203d0c2a2405446f732c2a4212bd"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.876663 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" event={"ID":"a1b71198-134e-4cec-9f0b-b28979adf785","Type":"ContainerStarted","Data":"fb874f499a6b285303ded18c794e84df0779fb0aba9ec0240e9a4826f30b76b7"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.890293 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-9g2xn" podStartSLOduration=60.890266739 podStartE2EDuration="1m0.890266739s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:01.887688267 +0000 UTC m=+123.361832611" watchObservedRunningTime="2026-03-08 00:25:01.890266739 +0000 UTC m=+123.364411083" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.916255 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.918313 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.918575 4762 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.918905 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:01 crc kubenswrapper[4762]: E0308 00:25:01.919054 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:02.41903147 +0000 UTC m=+123.893175814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.919319 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:01 crc kubenswrapper[4762]: E0308 00:25:01.921008 4762 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:02.420992782 +0000 UTC m=+123.895137126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.922069 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49qkg" event={"ID":"f160b607-f21e-4909-b6f5-a6756e9c6241","Type":"ContainerStarted","Data":"4803cfc22124dc6f5a52e5765ae4b19fe7a54fea33736a1e9b382a4c5fdd3bfc"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.936344 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" event={"ID":"da8e6450-ebd3-47b0-9153-1deebe16432f","Type":"ContainerStarted","Data":"7941624e6b36f0de9623c7b9367a7e3a3f9efb37c96d5c16b1f96bfe48ad0ced"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.936416 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" event={"ID":"da8e6450-ebd3-47b0-9153-1deebe16432f","Type":"ContainerStarted","Data":"59e5e5da8229209c0ea817972ba9690b0c329db2104d85c2c673a53374a81dcb"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.951690 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.952513 4762 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vnmhm" podStartSLOduration=60.95249186 podStartE2EDuration="1m0.95249186s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:01.950554698 +0000 UTC m=+123.424699042" watchObservedRunningTime="2026-03-08 00:25:01.95249186 +0000 UTC m=+123.426636204" Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.965329 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29548800-x52sd" event={"ID":"fe7222be-b489-4bcb-bc44-0a8933cde1c5","Type":"ContainerStarted","Data":"9532b00b8ca740ffacae166f71c7c571f648951120f006293386757418b516f6"} Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.982880 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf"] Mar 08 00:25:01 crc kubenswrapper[4762]: I0308 00:25:01.995653 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" podStartSLOduration=60.995628286 podStartE2EDuration="1m0.995628286s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:01.994202472 +0000 UTC m=+123.468346816" watchObservedRunningTime="2026-03-08 00:25:01.995628286 +0000 UTC m=+123.469772630" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.001639 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" 
event={"ID":"f5cbd39f-952e-4664-8994-4b2dd4162b25","Type":"ContainerStarted","Data":"f2b63f5ce333e14a3dac8e6045f94d70a14849b03a5f74efc3d68be15bf004ea"} Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.023087 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:02 crc kubenswrapper[4762]: E0308 00:25:02.030385 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:02.530359477 +0000 UTC m=+124.004503821 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.051476 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" event={"ID":"741e90e6-8de3-4054-94cf-7ada0da0e454","Type":"ContainerStarted","Data":"f5d4fbe82d38f25173bb879b5f7e715cd7b31a3847d2090d702cb2ed3e75eb37"} Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.051543 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 
00:25:02.051646 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.051691 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.066553 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.068240 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-xq6ct" podStartSLOduration=7.068230316 podStartE2EDuration="7.068230316s" podCreationTimestamp="2026-03-08 00:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:02.06769967 +0000 UTC m=+123.541844014" watchObservedRunningTime="2026-03-08 00:25:02.068230316 +0000 UTC m=+123.542374660" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.083233 4762 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vwrhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.083286 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" 
podUID="741e90e6-8de3-4054-94cf-7ada0da0e454" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.089651 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.099966 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.124653 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:02 crc kubenswrapper[4762]: E0308 00:25:02.127344 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:02.627332059 +0000 UTC m=+124.101476403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.191502 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-zdv55" podStartSLOduration=62.19144813 podStartE2EDuration="1m2.19144813s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:02.120783052 +0000 UTC m=+123.594927396" watchObservedRunningTime="2026-03-08 00:25:02.19144813 +0000 UTC m=+123.665592464" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.194341 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-49qkg" podStartSLOduration=61.19430939 podStartE2EDuration="1m1.19430939s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:02.178417047 +0000 UTC m=+123.652561391" watchObservedRunningTime="2026-03-08 00:25:02.19430939 +0000 UTC m=+123.668453734" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.227275 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:02 crc kubenswrapper[4762]: E0308 00:25:02.245007 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:02.744972575 +0000 UTC m=+124.219116919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.245093 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:02 crc kubenswrapper[4762]: E0308 00:25:02.245476 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:02.745469391 +0000 UTC m=+124.219613735 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.316375 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29548800-x52sd" podStartSLOduration=62.316361227 podStartE2EDuration="1m2.316361227s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:02.247302579 +0000 UTC m=+123.721446923" watchObservedRunningTime="2026-03-08 00:25:02.316361227 +0000 UTC m=+123.790505571" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.349040 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:02 crc kubenswrapper[4762]: E0308 00:25:02.349439 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:02.849424464 +0000 UTC m=+124.323568808 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.452484 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:02 crc kubenswrapper[4762]: E0308 00:25:02.452887 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:02.952875392 +0000 UTC m=+124.427019736 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.540322 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gppmf" podStartSLOduration=61.540290251 podStartE2EDuration="1m1.540290251s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:02.464094487 +0000 UTC m=+123.938238821" watchObservedRunningTime="2026-03-08 00:25:02.540290251 +0000 UTC m=+124.014434595" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.554270 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:02 crc kubenswrapper[4762]: E0308 00:25:02.554582 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:03.054566043 +0000 UTC m=+124.528710377 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.657266 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:02 crc kubenswrapper[4762]: E0308 00:25:02.658062 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:03.158047031 +0000 UTC m=+124.632191375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.732564 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" podStartSLOduration=61.732542501 podStartE2EDuration="1m1.732542501s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:02.699712251 +0000 UTC m=+124.173856595" watchObservedRunningTime="2026-03-08 00:25:02.732542501 +0000 UTC m=+124.206686845" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.733422 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:02 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:02 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:02 crc kubenswrapper[4762]: healthz check failed Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.733467 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.758699 4762 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:02 crc kubenswrapper[4762]: E0308 00:25:02.758891 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:03.258862385 +0000 UTC m=+124.733006729 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.758953 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:02 crc kubenswrapper[4762]: E0308 00:25:02.759566 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:03.259558996 +0000 UTC m=+124.733703340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.816196 4762 ???:1] "http: TLS handshake error from 192.168.126.11:42942: no serving certificate available for the kubelet" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.848519 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2sqjk" podStartSLOduration=61.848498795 podStartE2EDuration="1m1.848498795s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:02.788420491 +0000 UTC m=+124.262564845" watchObservedRunningTime="2026-03-08 00:25:02.848498795 +0000 UTC m=+124.322643139" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.860588 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:02 crc kubenswrapper[4762]: E0308 00:25:02.861023 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-08 00:25:03.36100713 +0000 UTC m=+124.835151474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.891711 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-mgmkn" podStartSLOduration=61.891695942 podStartE2EDuration="1m1.891695942s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:02.848307768 +0000 UTC m=+124.322452112" watchObservedRunningTime="2026-03-08 00:25:02.891695942 +0000 UTC m=+124.365840286" Mar 08 00:25:02 crc kubenswrapper[4762]: I0308 00:25:02.968274 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:02 crc kubenswrapper[4762]: E0308 00:25:02.968802 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:03.468786235 +0000 UTC m=+124.942930579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.069085 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:03 crc kubenswrapper[4762]: E0308 00:25:03.069445 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:03.569430363 +0000 UTC m=+125.043574707 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.069840 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jthcx" event={"ID":"46ce5811-52d7-493a-a861-90d666c994ed","Type":"ContainerStarted","Data":"1d5bfc0c1d7c2f06f2095cf66ef4ab4841d0ae9dfc2e2c8636b63f0fef17885a"} Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.071573 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" event={"ID":"741e90e6-8de3-4054-94cf-7ada0da0e454","Type":"ContainerStarted","Data":"e13cb7aace6e15528746f589d4d72d35b027591ec9f937bb15e317f6a9fef24c"} Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.072496 4762 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vwrhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.072539 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" podUID="741e90e6-8de3-4054-94cf-7ada0da0e454" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.079187 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-multus/cni-sysctl-allowlist-ds-ghr8q"] Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.096447 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" event={"ID":"04980224-fe82-485b-83f9-9c3d30b196db","Type":"ContainerStarted","Data":"f9363b8fc8ac34db5baa8b6349034b057f9b994a6c568d3a4bc373a0a2ec92a9"} Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.097222 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.122701 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx8np" event={"ID":"4e4e5ef1-02ce-403c-9903-0e9c734f8bdb","Type":"ContainerStarted","Data":"ac2f340c83af844dc0d8480bf44dc3abeada72bbf2787fec2447f15b6a190c84"} Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.130623 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vplbt" event={"ID":"68ab86d6-f824-445d-b441-b7cbba73630b","Type":"ContainerStarted","Data":"fe6eb200aaba961ed4faea6d8dd01483151cc72ccdb25db696bc14c520733e5a"} Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.135583 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jthcx" podStartSLOduration=63.135564698 podStartE2EDuration="1m3.135564698s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:03.122157203 +0000 UTC m=+124.596301547" watchObservedRunningTime="2026-03-08 00:25:03.135564698 +0000 UTC m=+124.609709042" Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.147917 4762 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-service-ca/service-ca-9c57cc56f-9g2xn" event={"ID":"cb0f35af-fe8c-4f8f-8674-f8f60f6ffa13","Type":"ContainerStarted","Data":"5418cf4f5f92775493f2afb4ce8b64afde8147f82a144676986faa49791409e2"} Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.160703 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" podStartSLOduration=62.160689544 podStartE2EDuration="1m2.160689544s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:03.159124215 +0000 UTC m=+124.633268559" watchObservedRunningTime="2026-03-08 00:25:03.160689544 +0000 UTC m=+124.634833888" Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.169544 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86" event={"ID":"1a713e9e-1823-4b89-a74c-922ed73cdd15","Type":"ContainerStarted","Data":"98d25c0f2906ac53ea4fd21eff58374551b2437850d42cffe7712f5fec8ecd35"} Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.170263 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:03 crc kubenswrapper[4762]: E0308 00:25:03.173858 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-08 00:25:03.67383615 +0000 UTC m=+125.147980494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.176202 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" event={"ID":"a1b71198-134e-4cec-9f0b-b28979adf785","Type":"ContainerStarted","Data":"d847f33cd40fb03b71a9e6df743c36e9d4b16087f3dcdcb9a0edb416e0cf424e"} Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.176813 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.195272 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vplbt" podStartSLOduration=62.195255989 podStartE2EDuration="1m2.195255989s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:03.192923985 +0000 UTC m=+124.667068339" watchObservedRunningTime="2026-03-08 00:25:03.195255989 +0000 UTC m=+124.669400333" Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.217957 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-77vgj" 
event={"ID":"0a34caab-79f1-437e-be12-817faf4e8917","Type":"ContainerStarted","Data":"40ba8885e8795c4239cee8aa21566a30ff427b5def867c4398672cca4c1f88b9"} Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.218007 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-77vgj" event={"ID":"0a34caab-79f1-437e-be12-817faf4e8917","Type":"ContainerStarted","Data":"a0c9118c91935c3e4b9a318e8e20ddd44256c561ded8d94e1b73b639684214d0"} Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.218618 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-77vgj" Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.248458 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-fx8np" podStartSLOduration=62.248435264 podStartE2EDuration="1m2.248435264s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:03.24769176 +0000 UTC m=+124.721836114" watchObservedRunningTime="2026-03-08 00:25:03.248435264 +0000 UTC m=+124.722579608" Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.260154 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85" event={"ID":"9970d4db-1af9-4970-a930-c469cc02bf9f","Type":"ContainerStarted","Data":"4873b4e2258a9aed9d8547affa92e510cdb15c0ba5a95776c04d2dd83ad1f42e"} Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.272224 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:03 crc 
kubenswrapper[4762]: E0308 00:25:03.274181 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:03.774159539 +0000 UTC m=+125.248303883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.278445 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" event={"ID":"a1568d57-fbed-428e-9898-c3d5863be0a2","Type":"ContainerStarted","Data":"13c662e3f20f96580bb1d13070716be0616b3589465928b040dd22d50ca2044d"} Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.287147 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podStartSLOduration=63.28713308 podStartE2EDuration="1m3.28713308s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:03.286981745 +0000 UTC m=+124.761126089" watchObservedRunningTime="2026-03-08 00:25:03.28713308 +0000 UTC m=+124.761277424" Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.302209 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k687p" 
event={"ID":"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d","Type":"ContainerStarted","Data":"bd30b79246c23662a85308821e2bcc069e1248a592ed2685910e29fc2a1210ad"} Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.320287 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mft86" podStartSLOduration=62.320262459 podStartE2EDuration="1m2.320262459s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:03.319823715 +0000 UTC m=+124.793968059" watchObservedRunningTime="2026-03-08 00:25:03.320262459 +0000 UTC m=+124.794406803" Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.355526 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" event={"ID":"4de5942e-acf8-4138-acc3-42c177a7f997","Type":"ContainerStarted","Data":"140a56ad7d710cb74722b3bd1b443cb1f947abb10cd84309e956a26cad1aae5f"} Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.361004 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-77vgj" podStartSLOduration=8.36098886 podStartE2EDuration="8.36098886s" podCreationTimestamp="2026-03-08 00:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:03.360547675 +0000 UTC m=+124.834692019" watchObservedRunningTime="2026-03-08 00:25:03.36098886 +0000 UTC m=+124.835133204" Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.373653 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:03 crc kubenswrapper[4762]: E0308 00:25:03.375665 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:03.875650233 +0000 UTC m=+125.349794577 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.384254 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wd6hs" event={"ID":"af536ee8-823b-4496-b7b5-b9dee6b9c957","Type":"ContainerStarted","Data":"bf7295cd147c13f219f9fdac9c931539decfb0098ed47f0db1b8d516fb7548d8"} Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.384298 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wd6hs" event={"ID":"af536ee8-823b-4496-b7b5-b9dee6b9c957","Type":"ContainerStarted","Data":"b0dba820b3ceedfdb2ddfdf737ec83f9d3ccfe79942e2f2d008bffa657993883"} Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.384414 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" podUID="1d1f4801-1613-4369-8d06-d0345df9703a" containerName="controller-manager" 
containerID="cri-o://73475feaa0b6745c7731b9aa81b2a818c1e9e6b97ac5ef799e30b7d2be889051" gracePeriod=30 Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.399576 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.408709 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mns85" podStartSLOduration=62.408691191 podStartE2EDuration="1m2.408691191s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:03.399897702 +0000 UTC m=+124.874042046" watchObservedRunningTime="2026-03-08 00:25:03.408691191 +0000 UTC m=+124.882835545" Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.479266 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.479394 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-wd6hs" podStartSLOduration=62.479377589 podStartE2EDuration="1m2.479377589s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:03.440077544 +0000 UTC m=+124.914221888" watchObservedRunningTime="2026-03-08 00:25:03.479377589 +0000 UTC m=+124.953521933" Mar 08 00:25:03 crc kubenswrapper[4762]: E0308 00:25:03.481127 4762 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:03.981113235 +0000 UTC m=+125.455257579 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.583379 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:03 crc kubenswrapper[4762]: E0308 00:25:03.583999 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:04.083988374 +0000 UTC m=+125.558132708 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.685183 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:03 crc kubenswrapper[4762]: E0308 00:25:03.685342 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:04.185317324 +0000 UTC m=+125.659461668 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.685440 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:03 crc kubenswrapper[4762]: E0308 00:25:03.685796 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:04.185751348 +0000 UTC m=+125.659895692 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.718775 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:03 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:03 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:03 crc kubenswrapper[4762]: healthz check failed Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.718845 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.787192 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:03 crc kubenswrapper[4762]: E0308 00:25:03.787577 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-08 00:25:04.287561612 +0000 UTC m=+125.761705956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.806909 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.893307 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:03 crc kubenswrapper[4762]: E0308 00:25:03.893659 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:04.393645473 +0000 UTC m=+125.867789817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:03 crc kubenswrapper[4762]: I0308 00:25:03.994399 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:03 crc kubenswrapper[4762]: E0308 00:25:03.995141 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:04.495126288 +0000 UTC m=+125.969270632 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.042537 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.052677 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.096031 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-config\") pod \"1d1f4801-1613-4369-8d06-d0345df9703a\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.096162 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-client-ca\") pod \"1d1f4801-1613-4369-8d06-d0345df9703a\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.096194 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-proxy-ca-bundles\") pod \"1d1f4801-1613-4369-8d06-d0345df9703a\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.096359 4762 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d1f4801-1613-4369-8d06-d0345df9703a-serving-cert\") pod \"1d1f4801-1613-4369-8d06-d0345df9703a\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.096395 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnm4s\" (UniqueName: \"kubernetes.io/projected/1d1f4801-1613-4369-8d06-d0345df9703a-kube-api-access-bnm4s\") pod \"1d1f4801-1613-4369-8d06-d0345df9703a\" (UID: \"1d1f4801-1613-4369-8d06-d0345df9703a\") " Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.096556 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:04 crc kubenswrapper[4762]: E0308 00:25:04.096950 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:04.596934243 +0000 UTC m=+126.071078587 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.097439 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-client-ca" (OuterVolumeSpecName: "client-ca") pod "1d1f4801-1613-4369-8d06-d0345df9703a" (UID: "1d1f4801-1613-4369-8d06-d0345df9703a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.097533 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1d1f4801-1613-4369-8d06-d0345df9703a" (UID: "1d1f4801-1613-4369-8d06-d0345df9703a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.097623 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-config" (OuterVolumeSpecName: "config") pod "1d1f4801-1613-4369-8d06-d0345df9703a" (UID: "1d1f4801-1613-4369-8d06-d0345df9703a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.113832 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d1f4801-1613-4369-8d06-d0345df9703a-kube-api-access-bnm4s" (OuterVolumeSpecName: "kube-api-access-bnm4s") pod "1d1f4801-1613-4369-8d06-d0345df9703a" (UID: "1d1f4801-1613-4369-8d06-d0345df9703a"). InnerVolumeSpecName "kube-api-access-bnm4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.121181 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d1f4801-1613-4369-8d06-d0345df9703a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1d1f4801-1613-4369-8d06-d0345df9703a" (UID: "1d1f4801-1613-4369-8d06-d0345df9703a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.201295 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.201652 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.201669 4762 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-client-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.201678 4762 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/1d1f4801-1613-4369-8d06-d0345df9703a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.201687 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d1f4801-1613-4369-8d06-d0345df9703a-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.201696 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnm4s\" (UniqueName: \"kubernetes.io/projected/1d1f4801-1613-4369-8d06-d0345df9703a-kube-api-access-bnm4s\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:04 crc kubenswrapper[4762]: E0308 00:25:04.201787 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:04.701747883 +0000 UTC m=+126.175892217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.302436 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:04 crc kubenswrapper[4762]: E0308 00:25:04.303136 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-08 00:25:04.803119044 +0000 UTC m=+126.277263388 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-n786p" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.395905 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k687p" event={"ID":"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d","Type":"ContainerStarted","Data":"3dbf4dfdc866d0649cbdcce34309ab24fa9467638033261f462ca8a7af02c609"} Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.395949 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k687p" event={"ID":"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d","Type":"ContainerStarted","Data":"948e8cf9b8eb8bc92a90a9dd86ebb044f20ed91f23f723563e0fbd9f2dbfcedb"} Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.399531 4762 generic.go:334] "Generic (PLEG): container finished" podID="1d1f4801-1613-4369-8d06-d0345df9703a" containerID="73475feaa0b6745c7731b9aa81b2a818c1e9e6b97ac5ef799e30b7d2be889051" exitCode=0 Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.399610 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.399670 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" event={"ID":"1d1f4801-1613-4369-8d06-d0345df9703a","Type":"ContainerDied","Data":"73475feaa0b6745c7731b9aa81b2a818c1e9e6b97ac5ef799e30b7d2be889051"} Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.399703 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-vw2gn" event={"ID":"1d1f4801-1613-4369-8d06-d0345df9703a","Type":"ContainerDied","Data":"643590ddda79e87c2dcfda3f15ea58940ddf7d937d4a63bdb0493273641fcdc8"} Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.399734 4762 scope.go:117] "RemoveContainer" containerID="73475feaa0b6745c7731b9aa81b2a818c1e9e6b97ac5ef799e30b7d2be889051" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.400003 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" podUID="a1568d57-fbed-428e-9898-c3d5863be0a2" containerName="route-controller-manager" containerID="cri-o://13c662e3f20f96580bb1d13070716be0616b3589465928b040dd22d50ca2044d" gracePeriod=30 Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.402282 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" podUID="47863e3b-949c-40f1-bdb3-2d940b78cda0" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://30846f2355ea0583dd129fc3bc21d56f1644db85f37c7e512531d71b23121d23" gracePeriod=30 Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.404642 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.404897 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs\") pod \"network-metrics-daemon-gdnwf\" (UID: \"a6d5f4b4-a877-45da-9fed-81885011430f\") " pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:25:04 crc kubenswrapper[4762]: E0308 00:25:04.405677 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-08 00:25:04.905648632 +0000 UTC m=+126.379792976 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.416315 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a6d5f4b4-a877-45da-9fed-81885011430f-metrics-certs\") pod \"network-metrics-daemon-gdnwf\" (UID: \"a6d5f4b4-a877-45da-9fed-81885011430f\") " pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.422010 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.463347 4762 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.473106 4762 scope.go:117] "RemoveContainer" containerID="73475feaa0b6745c7731b9aa81b2a818c1e9e6b97ac5ef799e30b7d2be889051" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.479586 4762 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-08T00:25:04.463382531Z","Handler":null,"Name":""} Mar 08 00:25:04 crc kubenswrapper[4762]: E0308 00:25:04.479983 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73475feaa0b6745c7731b9aa81b2a818c1e9e6b97ac5ef799e30b7d2be889051\": container with ID starting with 73475feaa0b6745c7731b9aa81b2a818c1e9e6b97ac5ef799e30b7d2be889051 not found: ID does not exist" containerID="73475feaa0b6745c7731b9aa81b2a818c1e9e6b97ac5ef799e30b7d2be889051" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.480028 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73475feaa0b6745c7731b9aa81b2a818c1e9e6b97ac5ef799e30b7d2be889051"} err="failed to get container status \"73475feaa0b6745c7731b9aa81b2a818c1e9e6b97ac5ef799e30b7d2be889051\": rpc error: code = NotFound desc = could not find container \"73475feaa0b6745c7731b9aa81b2a818c1e9e6b97ac5ef799e30b7d2be889051\": container with ID starting with 73475feaa0b6745c7731b9aa81b2a818c1e9e6b97ac5ef799e30b7d2be889051 not found: ID does not exist" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.487553 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-vw2gn"] Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.495034 4762 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.495083 4762 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.496523 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-vw2gn"] Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.508810 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.528682 4762 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.528725 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.563554 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.581826 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gdnwf" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.597997 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-n786p\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.614330 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.630541 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.652177 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.724983 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:04 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:04 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:04 crc kubenswrapper[4762]: healthz check failed Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.725038 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.803770 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.808357 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.811561 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dcsdg"] Mar 08 00:25:04 crc kubenswrapper[4762]: E0308 00:25:04.826990 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d1f4801-1613-4369-8d06-d0345df9703a" containerName="controller-manager" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.827036 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d1f4801-1613-4369-8d06-d0345df9703a" containerName="controller-manager" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.827237 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d1f4801-1613-4369-8d06-d0345df9703a" containerName="controller-manager" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.827955 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.832366 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dcsdg"] Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.837473 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.925457 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8s6q\" (UniqueName: \"kubernetes.io/projected/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-kube-api-access-r8s6q\") pod \"community-operators-dcsdg\" (UID: \"210aa3ef-23bb-4e7b-9ff5-39cec85310ba\") " pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.926015 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-utilities\") pod \"community-operators-dcsdg\" (UID: \"210aa3ef-23bb-4e7b-9ff5-39cec85310ba\") " pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.926104 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-catalog-content\") pod \"community-operators-dcsdg\" (UID: \"210aa3ef-23bb-4e7b-9ff5-39cec85310ba\") " pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.933206 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gdnwf"] Mar 08 00:25:04 crc kubenswrapper[4762]: I0308 00:25:04.947827 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.027036 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78fn5\" (UniqueName: \"kubernetes.io/projected/a1568d57-fbed-428e-9898-c3d5863be0a2-kube-api-access-78fn5\") pod \"a1568d57-fbed-428e-9898-c3d5863be0a2\" (UID: \"a1568d57-fbed-428e-9898-c3d5863be0a2\") " Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.027097 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1568d57-fbed-428e-9898-c3d5863be0a2-config\") pod \"a1568d57-fbed-428e-9898-c3d5863be0a2\" (UID: \"a1568d57-fbed-428e-9898-c3d5863be0a2\") " Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.027131 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a1568d57-fbed-428e-9898-c3d5863be0a2-client-ca\") pod \"a1568d57-fbed-428e-9898-c3d5863be0a2\" (UID: \"a1568d57-fbed-428e-9898-c3d5863be0a2\") " Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.027178 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1568d57-fbed-428e-9898-c3d5863be0a2-serving-cert\") pod \"a1568d57-fbed-428e-9898-c3d5863be0a2\" (UID: \"a1568d57-fbed-428e-9898-c3d5863be0a2\") " Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.027331 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-utilities\") pod \"community-operators-dcsdg\" (UID: \"210aa3ef-23bb-4e7b-9ff5-39cec85310ba\") " pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.027377 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-catalog-content\") pod \"community-operators-dcsdg\" (UID: \"210aa3ef-23bb-4e7b-9ff5-39cec85310ba\") " pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.027494 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8s6q\" (UniqueName: \"kubernetes.io/projected/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-kube-api-access-r8s6q\") pod \"community-operators-dcsdg\" (UID: \"210aa3ef-23bb-4e7b-9ff5-39cec85310ba\") " pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.028906 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1568d57-fbed-428e-9898-c3d5863be0a2-client-ca" (OuterVolumeSpecName: "client-ca") pod "a1568d57-fbed-428e-9898-c3d5863be0a2" 
(UID: "a1568d57-fbed-428e-9898-c3d5863be0a2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.031711 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1568d57-fbed-428e-9898-c3d5863be0a2-config" (OuterVolumeSpecName: "config") pod "a1568d57-fbed-428e-9898-c3d5863be0a2" (UID: "a1568d57-fbed-428e-9898-c3d5863be0a2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.031998 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-utilities\") pod \"community-operators-dcsdg\" (UID: \"210aa3ef-23bb-4e7b-9ff5-39cec85310ba\") " pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.032045 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-catalog-content\") pod \"community-operators-dcsdg\" (UID: \"210aa3ef-23bb-4e7b-9ff5-39cec85310ba\") " pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.038712 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1568d57-fbed-428e-9898-c3d5863be0a2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a1568d57-fbed-428e-9898-c3d5863be0a2" (UID: "a1568d57-fbed-428e-9898-c3d5863be0a2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.041904 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1568d57-fbed-428e-9898-c3d5863be0a2-kube-api-access-78fn5" (OuterVolumeSpecName: "kube-api-access-78fn5") pod "a1568d57-fbed-428e-9898-c3d5863be0a2" (UID: "a1568d57-fbed-428e-9898-c3d5863be0a2"). InnerVolumeSpecName "kube-api-access-78fn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.052020 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8s6q\" (UniqueName: \"kubernetes.io/projected/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-kube-api-access-r8s6q\") pod \"community-operators-dcsdg\" (UID: \"210aa3ef-23bb-4e7b-9ff5-39cec85310ba\") " pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.088678 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n786p"] Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.129072 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1568d57-fbed-428e-9898-c3d5863be0a2-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.129107 4762 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a1568d57-fbed-428e-9898-c3d5863be0a2-client-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.129116 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1568d57-fbed-428e-9898-c3d5863be0a2-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.129125 4762 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-78fn5\" (UniqueName: \"kubernetes.io/projected/a1568d57-fbed-428e-9898-c3d5863be0a2-kube-api-access-78fn5\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.184567 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.203191 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x7tm2"] Mar 08 00:25:05 crc kubenswrapper[4762]: E0308 00:25:05.203377 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1568d57-fbed-428e-9898-c3d5863be0a2" containerName="route-controller-manager" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.203389 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1568d57-fbed-428e-9898-c3d5863be0a2" containerName="route-controller-manager" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.203474 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1568d57-fbed-428e-9898-c3d5863be0a2" containerName="route-controller-manager" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.204130 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.225723 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7tm2"] Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.232744 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82abd8f0-adc8-4094-a833-073e1cc68f50-utilities\") pod \"community-operators-x7tm2\" (UID: \"82abd8f0-adc8-4094-a833-073e1cc68f50\") " pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.232932 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82abd8f0-adc8-4094-a833-073e1cc68f50-catalog-content\") pod \"community-operators-x7tm2\" (UID: \"82abd8f0-adc8-4094-a833-073e1cc68f50\") " pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.232983 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvj22\" (UniqueName: \"kubernetes.io/projected/82abd8f0-adc8-4094-a833-073e1cc68f50-kube-api-access-fvj22\") pod \"community-operators-x7tm2\" (UID: \"82abd8f0-adc8-4094-a833-073e1cc68f50\") " pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.290697 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d1f4801-1613-4369-8d06-d0345df9703a" path="/var/lib/kubelet/pods/1d1f4801-1613-4369-8d06-d0345df9703a/volumes" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.291381 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" 
Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.333987 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82abd8f0-adc8-4094-a833-073e1cc68f50-catalog-content\") pod \"community-operators-x7tm2\" (UID: \"82abd8f0-adc8-4094-a833-073e1cc68f50\") " pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.335175 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82abd8f0-adc8-4094-a833-073e1cc68f50-catalog-content\") pod \"community-operators-x7tm2\" (UID: \"82abd8f0-adc8-4094-a833-073e1cc68f50\") " pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.334037 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvj22\" (UniqueName: \"kubernetes.io/projected/82abd8f0-adc8-4094-a833-073e1cc68f50-kube-api-access-fvj22\") pod \"community-operators-x7tm2\" (UID: \"82abd8f0-adc8-4094-a833-073e1cc68f50\") " pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.335280 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82abd8f0-adc8-4094-a833-073e1cc68f50-utilities\") pod \"community-operators-x7tm2\" (UID: \"82abd8f0-adc8-4094-a833-073e1cc68f50\") " pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.335595 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82abd8f0-adc8-4094-a833-073e1cc68f50-utilities\") pod \"community-operators-x7tm2\" (UID: \"82abd8f0-adc8-4094-a833-073e1cc68f50\") " pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:25:05 crc kubenswrapper[4762]: 
I0308 00:25:05.357641 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvj22\" (UniqueName: \"kubernetes.io/projected/82abd8f0-adc8-4094-a833-073e1cc68f50-kube-api-access-fvj22\") pod \"community-operators-x7tm2\" (UID: \"82abd8f0-adc8-4094-a833-073e1cc68f50\") " pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.370836 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv"] Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.371538 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.374252 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg"] Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.374950 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.377612 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.377819 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.379168 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.379184 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.379273 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.379383 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.380345 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.396842 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg"] Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.404684 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv"] Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.407887 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qv7hs"] Mar 08 00:25:05 crc 
kubenswrapper[4762]: I0308 00:25:05.408887 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.418142 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.430834 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qv7hs"] Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.436129 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcc5q\" (UniqueName: \"kubernetes.io/projected/1b1f4525-a957-4708-b166-0b16f67cb20a-kube-api-access-tcc5q\") pod \"certified-operators-qv7hs\" (UID: \"1b1f4525-a957-4708-b166-0b16f67cb20a\") " pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.436195 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkjv9\" (UniqueName: \"kubernetes.io/projected/6522ba1b-d390-4b3a-b825-00f66d60a0e9-kube-api-access-pkjv9\") pod \"controller-manager-7cfb5dcc4f-rv9sv\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.436223 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b1f4525-a957-4708-b166-0b16f67cb20a-utilities\") pod \"certified-operators-qv7hs\" (UID: \"1b1f4525-a957-4708-b166-0b16f67cb20a\") " pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.436255 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/04273ede-bf68-404e-af9a-93340dd6ed77-config\") pod \"route-controller-manager-749b5bd996-kplzg\" (UID: \"04273ede-bf68-404e-af9a-93340dd6ed77\") " pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.436290 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5bgn\" (UniqueName: \"kubernetes.io/projected/04273ede-bf68-404e-af9a-93340dd6ed77-kube-api-access-c5bgn\") pod \"route-controller-manager-749b5bd996-kplzg\" (UID: \"04273ede-bf68-404e-af9a-93340dd6ed77\") " pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.436315 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-proxy-ca-bundles\") pod \"controller-manager-7cfb5dcc4f-rv9sv\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.436342 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-config\") pod \"controller-manager-7cfb5dcc4f-rv9sv\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.436360 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-client-ca\") pod \"controller-manager-7cfb5dcc4f-rv9sv\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " 
pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.436383 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04273ede-bf68-404e-af9a-93340dd6ed77-serving-cert\") pod \"route-controller-manager-749b5bd996-kplzg\" (UID: \"04273ede-bf68-404e-af9a-93340dd6ed77\") " pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.436450 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6522ba1b-d390-4b3a-b825-00f66d60a0e9-serving-cert\") pod \"controller-manager-7cfb5dcc4f-rv9sv\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.436479 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b1f4525-a957-4708-b166-0b16f67cb20a-catalog-content\") pod \"certified-operators-qv7hs\" (UID: \"1b1f4525-a957-4708-b166-0b16f67cb20a\") " pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.436506 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04273ede-bf68-404e-af9a-93340dd6ed77-client-ca\") pod \"route-controller-manager-749b5bd996-kplzg\" (UID: \"04273ede-bf68-404e-af9a-93340dd6ed77\") " pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.439958 4762 generic.go:334] "Generic (PLEG): container finished" podID="a1568d57-fbed-428e-9898-c3d5863be0a2" 
containerID="13c662e3f20f96580bb1d13070716be0616b3589465928b040dd22d50ca2044d" exitCode=0 Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.440027 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" event={"ID":"a1568d57-fbed-428e-9898-c3d5863be0a2","Type":"ContainerDied","Data":"13c662e3f20f96580bb1d13070716be0616b3589465928b040dd22d50ca2044d"} Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.440055 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" event={"ID":"a1568d57-fbed-428e-9898-c3d5863be0a2","Type":"ContainerDied","Data":"23671731e0576eca1ada2d8fd4d064ec06e650cf08d8b4f7b41ae58ee479c7fe"} Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.440073 4762 scope.go:117] "RemoveContainer" containerID="13c662e3f20f96580bb1d13070716be0616b3589465928b040dd22d50ca2044d" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.440268 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.459114 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-k687p" event={"ID":"b2dce5bf-2a64-44af-bfe2-0a15fd5d357d","Type":"ContainerStarted","Data":"cfddb3a4ddb8d9ce259ca7e0f1dba62d85f754af73f831442c6c370d12a992e6"} Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.467483 4762 ???:1] "http: TLS handshake error from 192.168.126.11:37958: no serving certificate available for the kubelet" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.468601 4762 scope.go:117] "RemoveContainer" containerID="13c662e3f20f96580bb1d13070716be0616b3589465928b040dd22d50ca2044d" Mar 08 00:25:05 crc kubenswrapper[4762]: E0308 00:25:05.474017 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13c662e3f20f96580bb1d13070716be0616b3589465928b040dd22d50ca2044d\": container with ID starting with 13c662e3f20f96580bb1d13070716be0616b3589465928b040dd22d50ca2044d not found: ID does not exist" containerID="13c662e3f20f96580bb1d13070716be0616b3589465928b040dd22d50ca2044d" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.474068 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13c662e3f20f96580bb1d13070716be0616b3589465928b040dd22d50ca2044d"} err="failed to get container status \"13c662e3f20f96580bb1d13070716be0616b3589465928b040dd22d50ca2044d\": rpc error: code = NotFound desc = could not find container \"13c662e3f20f96580bb1d13070716be0616b3589465928b040dd22d50ca2044d\": container with ID starting with 13c662e3f20f96580bb1d13070716be0616b3589465928b040dd22d50ca2044d not found: ID does not exist" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.494533 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-n786p" event={"ID":"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f","Type":"ContainerStarted","Data":"8de926cfa25764ec959071aa5afedb7ca1dbb5590ad6977c004cc0517192e2ae"} Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.494581 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n786p" event={"ID":"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f","Type":"ContainerStarted","Data":"22780104e914ff35adf752aaca2d1d52e58a49805290ab77e68adb90b75baac0"} Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.495227 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.496874 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gdnwf" event={"ID":"a6d5f4b4-a877-45da-9fed-81885011430f","Type":"ContainerStarted","Data":"ee49ab91f1edd835f38d065b3ad1dea6a431e6e9c073fc445ac7189decab6fed"} Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.496898 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gdnwf" event={"ID":"a6d5f4b4-a877-45da-9fed-81885011430f","Type":"ContainerStarted","Data":"5a577cd18533059890762ecd3c6c8dd32ee9675cd4339008958a2d37fdc2ad48"} Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.517620 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-k687p" podStartSLOduration=10.517590627 podStartE2EDuration="10.517590627s" podCreationTimestamp="2026-03-08 00:24:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:05.513962003 +0000 UTC m=+126.988106367" watchObservedRunningTime="2026-03-08 00:25:05.517590627 +0000 UTC m=+126.991734971" Mar 08 00:25:05 crc 
kubenswrapper[4762]: I0308 00:25:05.520997 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.526240 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.531651 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf"] Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.540601 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcc5q\" (UniqueName: \"kubernetes.io/projected/1b1f4525-a957-4708-b166-0b16f67cb20a-kube-api-access-tcc5q\") pod \"certified-operators-qv7hs\" (UID: \"1b1f4525-a957-4708-b166-0b16f67cb20a\") " pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.540718 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b1f4525-a957-4708-b166-0b16f67cb20a-utilities\") pod \"certified-operators-qv7hs\" (UID: \"1b1f4525-a957-4708-b166-0b16f67cb20a\") " pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.540741 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkjv9\" (UniqueName: \"kubernetes.io/projected/6522ba1b-d390-4b3a-b825-00f66d60a0e9-kube-api-access-pkjv9\") pod \"controller-manager-7cfb5dcc4f-rv9sv\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.540838 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/04273ede-bf68-404e-af9a-93340dd6ed77-config\") pod \"route-controller-manager-749b5bd996-kplzg\" (UID: \"04273ede-bf68-404e-af9a-93340dd6ed77\") " pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.540923 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5bgn\" (UniqueName: \"kubernetes.io/projected/04273ede-bf68-404e-af9a-93340dd6ed77-kube-api-access-c5bgn\") pod \"route-controller-manager-749b5bd996-kplzg\" (UID: \"04273ede-bf68-404e-af9a-93340dd6ed77\") " pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.540951 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-proxy-ca-bundles\") pod \"controller-manager-7cfb5dcc4f-rv9sv\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.540980 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-config\") pod \"controller-manager-7cfb5dcc4f-rv9sv\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.541017 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-client-ca\") pod \"controller-manager-7cfb5dcc4f-rv9sv\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: 
I0308 00:25:05.541066 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04273ede-bf68-404e-af9a-93340dd6ed77-serving-cert\") pod \"route-controller-manager-749b5bd996-kplzg\" (UID: \"04273ede-bf68-404e-af9a-93340dd6ed77\") " pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.541118 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6522ba1b-d390-4b3a-b825-00f66d60a0e9-serving-cert\") pod \"controller-manager-7cfb5dcc4f-rv9sv\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.541149 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b1f4525-a957-4708-b166-0b16f67cb20a-catalog-content\") pod \"certified-operators-qv7hs\" (UID: \"1b1f4525-a957-4708-b166-0b16f67cb20a\") " pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.541179 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04273ede-bf68-404e-af9a-93340dd6ed77-client-ca\") pod \"route-controller-manager-749b5bd996-kplzg\" (UID: \"04273ede-bf68-404e-af9a-93340dd6ed77\") " pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.543479 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-proxy-ca-bundles\") pod \"controller-manager-7cfb5dcc4f-rv9sv\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " 
pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.544545 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b1f4525-a957-4708-b166-0b16f67cb20a-catalog-content\") pod \"certified-operators-qv7hs\" (UID: \"1b1f4525-a957-4708-b166-0b16f67cb20a\") " pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.544646 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04273ede-bf68-404e-af9a-93340dd6ed77-client-ca\") pod \"route-controller-manager-749b5bd996-kplzg\" (UID: \"04273ede-bf68-404e-af9a-93340dd6ed77\") " pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.545597 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-config\") pod \"controller-manager-7cfb5dcc4f-rv9sv\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.546087 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-client-ca\") pod \"controller-manager-7cfb5dcc4f-rv9sv\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.546388 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-jm2zf"] Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.546438 4762 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b1f4525-a957-4708-b166-0b16f67cb20a-utilities\") pod \"certified-operators-qv7hs\" (UID: \"1b1f4525-a957-4708-b166-0b16f67cb20a\") " pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.547321 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04273ede-bf68-404e-af9a-93340dd6ed77-config\") pod \"route-controller-manager-749b5bd996-kplzg\" (UID: \"04273ede-bf68-404e-af9a-93340dd6ed77\") " pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.560399 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04273ede-bf68-404e-af9a-93340dd6ed77-serving-cert\") pod \"route-controller-manager-749b5bd996-kplzg\" (UID: \"04273ede-bf68-404e-af9a-93340dd6ed77\") " pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.561655 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6522ba1b-d390-4b3a-b825-00f66d60a0e9-serving-cert\") pod \"controller-manager-7cfb5dcc4f-rv9sv\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.568636 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcc5q\" (UniqueName: \"kubernetes.io/projected/1b1f4525-a957-4708-b166-0b16f67cb20a-kube-api-access-tcc5q\") pod \"certified-operators-qv7hs\" (UID: \"1b1f4525-a957-4708-b166-0b16f67cb20a\") " pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.572423 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5bgn\" (UniqueName: \"kubernetes.io/projected/04273ede-bf68-404e-af9a-93340dd6ed77-kube-api-access-c5bgn\") pod \"route-controller-manager-749b5bd996-kplzg\" (UID: \"04273ede-bf68-404e-af9a-93340dd6ed77\") " pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.573801 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkjv9\" (UniqueName: \"kubernetes.io/projected/6522ba1b-d390-4b3a-b825-00f66d60a0e9-kube-api-access-pkjv9\") pod \"controller-manager-7cfb5dcc4f-rv9sv\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.573805 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-n786p" podStartSLOduration=64.573782947 podStartE2EDuration="1m4.573782947s" podCreationTimestamp="2026-03-08 00:24:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:05.571585808 +0000 UTC m=+127.045730162" watchObservedRunningTime="2026-03-08 00:25:05.573782947 +0000 UTC m=+127.047927291" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.601023 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7jhpd"] Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.604698 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.627548 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7jhpd"] Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.643498 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gkqv\" (UniqueName: \"kubernetes.io/projected/e304c866-47f5-4a38-b530-b816ec5e685a-kube-api-access-5gkqv\") pod \"certified-operators-7jhpd\" (UID: \"e304c866-47f5-4a38-b530-b816ec5e685a\") " pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.643543 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e304c866-47f5-4a38-b530-b816ec5e685a-catalog-content\") pod \"certified-operators-7jhpd\" (UID: \"e304c866-47f5-4a38-b530-b816ec5e685a\") " pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.644212 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e304c866-47f5-4a38-b530-b816ec5e685a-utilities\") pod \"certified-operators-7jhpd\" (UID: \"e304c866-47f5-4a38-b530-b816ec5e685a\") " pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.693980 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dcsdg"] Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.710204 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.714474 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.723132 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:05 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:05 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:05 crc kubenswrapper[4762]: healthz check failed Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.723180 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.742142 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.755788 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gkqv\" (UniqueName: \"kubernetes.io/projected/e304c866-47f5-4a38-b530-b816ec5e685a-kube-api-access-5gkqv\") pod \"certified-operators-7jhpd\" (UID: \"e304c866-47f5-4a38-b530-b816ec5e685a\") " pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.755842 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e304c866-47f5-4a38-b530-b816ec5e685a-catalog-content\") pod \"certified-operators-7jhpd\" (UID: \"e304c866-47f5-4a38-b530-b816ec5e685a\") " pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.755862 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e304c866-47f5-4a38-b530-b816ec5e685a-utilities\") pod \"certified-operators-7jhpd\" (UID: \"e304c866-47f5-4a38-b530-b816ec5e685a\") " pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.756388 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e304c866-47f5-4a38-b530-b816ec5e685a-utilities\") pod \"certified-operators-7jhpd\" (UID: \"e304c866-47f5-4a38-b530-b816ec5e685a\") " pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.757086 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e304c866-47f5-4a38-b530-b816ec5e685a-catalog-content\") pod \"certified-operators-7jhpd\" (UID: \"e304c866-47f5-4a38-b530-b816ec5e685a\") " 
pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.781593 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gkqv\" (UniqueName: \"kubernetes.io/projected/e304c866-47f5-4a38-b530-b816ec5e685a-kube-api-access-5gkqv\") pod \"certified-operators-7jhpd\" (UID: \"e304c866-47f5-4a38-b530-b816ec5e685a\") " pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:25:05 crc kubenswrapper[4762]: I0308 00:25:05.966448 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.026091 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x7tm2"] Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.028067 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg"] Mar 08 00:25:06 crc kubenswrapper[4762]: W0308 00:25:06.045207 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04273ede_bf68_404e_af9a_93340dd6ed77.slice/crio-4c64c0d2576d72575970d182f47ed678591e85f7d9357b3d01bb07e983904836 WatchSource:0}: Error finding container 4c64c0d2576d72575970d182f47ed678591e85f7d9357b3d01bb07e983904836: Status 404 returned error can't find the container with id 4c64c0d2576d72575970d182f47ed678591e85f7d9357b3d01bb07e983904836 Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.211441 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7jhpd"] Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.276428 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv"] Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.283137 4762 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qv7hs"] Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.504352 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" event={"ID":"04273ede-bf68-404e-af9a-93340dd6ed77","Type":"ContainerStarted","Data":"2442767b04775260174e609123089318b83f861e15cb9d47bc112e1e89993b13"} Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.504846 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.504863 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" event={"ID":"04273ede-bf68-404e-af9a-93340dd6ed77","Type":"ContainerStarted","Data":"4c64c0d2576d72575970d182f47ed678591e85f7d9357b3d01bb07e983904836"} Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.507747 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gdnwf" event={"ID":"a6d5f4b4-a877-45da-9fed-81885011430f","Type":"ContainerStarted","Data":"a77c6c18329b42673b6e86d5ac5914f77bd8d1411f0207d9b3819b50d090c0d6"} Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.515258 4762 generic.go:334] "Generic (PLEG): container finished" podID="210aa3ef-23bb-4e7b-9ff5-39cec85310ba" containerID="bafde9770a2310e4f7abd9331d59e7ee3103b84eb1f6c5d5e20ed96a99134a13" exitCode=0 Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.515414 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcsdg" event={"ID":"210aa3ef-23bb-4e7b-9ff5-39cec85310ba","Type":"ContainerDied","Data":"bafde9770a2310e4f7abd9331d59e7ee3103b84eb1f6c5d5e20ed96a99134a13"} Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.515494 4762 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcsdg" event={"ID":"210aa3ef-23bb-4e7b-9ff5-39cec85310ba","Type":"ContainerStarted","Data":"7b6cf5dad20512ca4374e24fa243c257373a5f582e69e8a739248c71206a82ea"} Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.517475 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.521321 4762 generic.go:334] "Generic (PLEG): container finished" podID="e304c866-47f5-4a38-b530-b816ec5e685a" containerID="f1741dec260d5b5dc8a460892a57c7de4a97f5e64ecbf362e6389d80a6f0d3c2" exitCode=0 Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.521547 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7jhpd" event={"ID":"e304c866-47f5-4a38-b530-b816ec5e685a","Type":"ContainerDied","Data":"f1741dec260d5b5dc8a460892a57c7de4a97f5e64ecbf362e6389d80a6f0d3c2"} Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.521598 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7jhpd" event={"ID":"e304c866-47f5-4a38-b530-b816ec5e685a","Type":"ContainerStarted","Data":"eb68046d04bc9bde7f77e47f4ed7107d6970de2c09efba3f4d6cd9ca274f7f93"} Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.530770 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" podStartSLOduration=3.530734522 podStartE2EDuration="3.530734522s" podCreationTimestamp="2026-03-08 00:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:06.528362917 +0000 UTC m=+128.002507261" watchObservedRunningTime="2026-03-08 00:25:06.530734522 +0000 UTC m=+128.004878866" Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.538905 4762 
generic.go:334] "Generic (PLEG): container finished" podID="1b1f4525-a957-4708-b166-0b16f67cb20a" containerID="6ba319e720db6f9b62f4f7e2208e79a9554a80b3a30f1d520f337309b4d2da5f" exitCode=0 Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.538994 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qv7hs" event={"ID":"1b1f4525-a957-4708-b166-0b16f67cb20a","Type":"ContainerDied","Data":"6ba319e720db6f9b62f4f7e2208e79a9554a80b3a30f1d520f337309b4d2da5f"} Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.539028 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qv7hs" event={"ID":"1b1f4525-a957-4708-b166-0b16f67cb20a","Type":"ContainerStarted","Data":"b5a4dc08aadd4dc756fd5a63203b83065f052e5a9b57cd269de436abbc056faa"} Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.543820 4762 generic.go:334] "Generic (PLEG): container finished" podID="681404ff-89eb-420d-b1e2-6769d4b51636" containerID="59c812da05d96c3fd40e2ce82e7659bdf249330efc6613b4b9dbac4ffcd05094" exitCode=0 Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.543964 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" event={"ID":"681404ff-89eb-420d-b1e2-6769d4b51636","Type":"ContainerDied","Data":"59c812da05d96c3fd40e2ce82e7659bdf249330efc6613b4b9dbac4ffcd05094"} Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.545445 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gdnwf" podStartSLOduration=66.545434367 podStartE2EDuration="1m6.545434367s" podCreationTimestamp="2026-03-08 00:24:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:06.543879369 +0000 UTC m=+128.018023713" watchObservedRunningTime="2026-03-08 00:25:06.545434367 +0000 UTC 
m=+128.019578711" Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.549983 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" event={"ID":"6522ba1b-d390-4b3a-b825-00f66d60a0e9","Type":"ContainerStarted","Data":"c8a17a49e9db4c5cf2b96c0063582987e9ac093845352682c8852165c30df648"} Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.550023 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" event={"ID":"6522ba1b-d390-4b3a-b825-00f66d60a0e9","Type":"ContainerStarted","Data":"df1992736903297094b3b0cc35e377165e94c9a52b4a884b8ce407a066260fb3"} Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.550452 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.552623 4762 patch_prober.go:28] interesting pod/controller-manager-7cfb5dcc4f-rv9sv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.47:8443/healthz\": dial tcp 10.217.0.47:8443: connect: connection refused" start-of-body= Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.552658 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" podUID="6522ba1b-d390-4b3a-b825-00f66d60a0e9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.47:8443/healthz\": dial tcp 10.217.0.47:8443: connect: connection refused" Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.555092 4762 generic.go:334] "Generic (PLEG): container finished" podID="82abd8f0-adc8-4094-a833-073e1cc68f50" containerID="06ad7787d015a0179d7dadede52b0e5ca9ce43480447305d1b723116e8b47c53" exitCode=0 Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.555243 4762 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7tm2" event={"ID":"82abd8f0-adc8-4094-a833-073e1cc68f50","Type":"ContainerDied","Data":"06ad7787d015a0179d7dadede52b0e5ca9ce43480447305d1b723116e8b47c53"} Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.555310 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7tm2" event={"ID":"82abd8f0-adc8-4094-a833-073e1cc68f50","Type":"ContainerStarted","Data":"2c53242d8df303527b26738a590752b3e98d4e5d33daad64a5d706dfca381072"} Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.602869 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.656459 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" podStartSLOduration=3.656434793 podStartE2EDuration="3.656434793s" podCreationTimestamp="2026-03-08 00:25:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:06.650474076 +0000 UTC m=+128.124618420" watchObservedRunningTime="2026-03-08 00:25:06.656434793 +0000 UTC m=+128.130579137" Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.723460 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:06 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:06 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:06 crc kubenswrapper[4762]: healthz check failed Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.723826 4762 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.728272 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.728328 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.728273 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.728382 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.993505 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.993964 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-842sk" Mar 08 
00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.995929 4762 patch_prober.go:28] interesting pod/console-f9d7485db-842sk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Mar 08 00:25:06 crc kubenswrapper[4762]: I0308 00:25:06.996076 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-842sk" podUID="ca82b7d9-bbba-4543-945b-e78923c1d3cf" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.005420 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wc85x"] Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.006395 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.015157 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.082198 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ac2172-da6d-436b-8cde-593837d65920-utilities\") pod \"redhat-marketplace-wc85x\" (UID: \"63ac2172-da6d-436b-8cde-593837d65920\") " pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.082275 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ac2172-da6d-436b-8cde-593837d65920-catalog-content\") pod \"redhat-marketplace-wc85x\" (UID: \"63ac2172-da6d-436b-8cde-593837d65920\") " 
pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.082420 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzqll\" (UniqueName: \"kubernetes.io/projected/63ac2172-da6d-436b-8cde-593837d65920-kube-api-access-vzqll\") pod \"redhat-marketplace-wc85x\" (UID: \"63ac2172-da6d-436b-8cde-593837d65920\") " pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.094052 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc85x"] Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.183788 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzqll\" (UniqueName: \"kubernetes.io/projected/63ac2172-da6d-436b-8cde-593837d65920-kube-api-access-vzqll\") pod \"redhat-marketplace-wc85x\" (UID: \"63ac2172-da6d-436b-8cde-593837d65920\") " pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.183868 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ac2172-da6d-436b-8cde-593837d65920-utilities\") pod \"redhat-marketplace-wc85x\" (UID: \"63ac2172-da6d-436b-8cde-593837d65920\") " pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.183918 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ac2172-da6d-436b-8cde-593837d65920-catalog-content\") pod \"redhat-marketplace-wc85x\" (UID: \"63ac2172-da6d-436b-8cde-593837d65920\") " pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.184548 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/63ac2172-da6d-436b-8cde-593837d65920-catalog-content\") pod \"redhat-marketplace-wc85x\" (UID: \"63ac2172-da6d-436b-8cde-593837d65920\") " pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.184637 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ac2172-da6d-436b-8cde-593837d65920-utilities\") pod \"redhat-marketplace-wc85x\" (UID: \"63ac2172-da6d-436b-8cde-593837d65920\") " pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.239601 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzqll\" (UniqueName: \"kubernetes.io/projected/63ac2172-da6d-436b-8cde-593837d65920-kube-api-access-vzqll\") pod \"redhat-marketplace-wc85x\" (UID: \"63ac2172-da6d-436b-8cde-593837d65920\") " pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.275382 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1568d57-fbed-428e-9898-c3d5863be0a2" path="/var/lib/kubelet/pods/a1568d57-fbed-428e-9898-c3d5863be0a2/volumes" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.317626 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.317696 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.319689 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.325886 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.415605 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k8bhh"] Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.416574 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.424466 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8bhh"] Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.487980 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h62np\" (UniqueName: \"kubernetes.io/projected/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-kube-api-access-h62np\") pod \"redhat-marketplace-k8bhh\" (UID: \"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b\") " pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.488070 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-utilities\") pod \"redhat-marketplace-k8bhh\" (UID: \"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b\") " pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.488135 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-catalog-content\") pod \"redhat-marketplace-k8bhh\" (UID: \"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b\") " 
pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.589967 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-utilities\") pod \"redhat-marketplace-k8bhh\" (UID: \"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b\") " pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.590610 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-catalog-content\") pod \"redhat-marketplace-k8bhh\" (UID: \"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b\") " pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.590727 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-utilities\") pod \"redhat-marketplace-k8bhh\" (UID: \"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b\") " pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.590798 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h62np\" (UniqueName: \"kubernetes.io/projected/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-kube-api-access-h62np\") pod \"redhat-marketplace-k8bhh\" (UID: \"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b\") " pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.591070 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jthcx" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.591327 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-catalog-content\") pod \"redhat-marketplace-k8bhh\" (UID: \"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b\") " pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.615293 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.627587 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h62np\" (UniqueName: \"kubernetes.io/projected/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-kube-api-access-h62np\") pod \"redhat-marketplace-k8bhh\" (UID: \"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b\") " pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.717922 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.723020 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:07 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:07 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:07 crc kubenswrapper[4762]: healthz check failed Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.723086 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:07 crc kubenswrapper[4762]: I0308 00:25:07.740398 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.041138 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.049164 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc85x"] Mar 08 00:25:08 crc kubenswrapper[4762]: W0308 00:25:08.060354 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63ac2172_da6d_436b_8cde_593837d65920.slice/crio-147ba06789405ddc0730aab97de66e98d94bafdaf1a1a0b8939a030049a60d44 WatchSource:0}: Error finding container 147ba06789405ddc0730aab97de66e98d94bafdaf1a1a0b8939a030049a60d44: Status 404 returned error can't find the container with id 147ba06789405ddc0730aab97de66e98d94bafdaf1a1a0b8939a030049a60d44 Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.212354 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/681404ff-89eb-420d-b1e2-6769d4b51636-config-volume\") pod \"681404ff-89eb-420d-b1e2-6769d4b51636\" (UID: \"681404ff-89eb-420d-b1e2-6769d4b51636\") " Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.212470 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv4ch\" (UniqueName: \"kubernetes.io/projected/681404ff-89eb-420d-b1e2-6769d4b51636-kube-api-access-lv4ch\") pod \"681404ff-89eb-420d-b1e2-6769d4b51636\" (UID: \"681404ff-89eb-420d-b1e2-6769d4b51636\") " Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.212526 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/681404ff-89eb-420d-b1e2-6769d4b51636-secret-volume\") pod 
\"681404ff-89eb-420d-b1e2-6769d4b51636\" (UID: \"681404ff-89eb-420d-b1e2-6769d4b51636\") " Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.214075 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/681404ff-89eb-420d-b1e2-6769d4b51636-config-volume" (OuterVolumeSpecName: "config-volume") pod "681404ff-89eb-420d-b1e2-6769d4b51636" (UID: "681404ff-89eb-420d-b1e2-6769d4b51636"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.224519 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/681404ff-89eb-420d-b1e2-6769d4b51636-kube-api-access-lv4ch" (OuterVolumeSpecName: "kube-api-access-lv4ch") pod "681404ff-89eb-420d-b1e2-6769d4b51636" (UID: "681404ff-89eb-420d-b1e2-6769d4b51636"). InnerVolumeSpecName "kube-api-access-lv4ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.227264 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/681404ff-89eb-420d-b1e2-6769d4b51636-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "681404ff-89eb-420d-b1e2-6769d4b51636" (UID: "681404ff-89eb-420d-b1e2-6769d4b51636"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.240408 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8bhh"] Mar 08 00:25:08 crc kubenswrapper[4762]: W0308 00:25:08.283779 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f99b9f4_d6bf_4cc3_98d5_e30c34c7295b.slice/crio-8ebcf030a6291365d0b5a4b18b4f695fceb60bde46a2ea6fb819c695cb25a4d5 WatchSource:0}: Error finding container 8ebcf030a6291365d0b5a4b18b4f695fceb60bde46a2ea6fb819c695cb25a4d5: Status 404 returned error can't find the container with id 8ebcf030a6291365d0b5a4b18b4f695fceb60bde46a2ea6fb819c695cb25a4d5 Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.314192 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv4ch\" (UniqueName: \"kubernetes.io/projected/681404ff-89eb-420d-b1e2-6769d4b51636-kube-api-access-lv4ch\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.314223 4762 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/681404ff-89eb-420d-b1e2-6769d4b51636-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.314235 4762 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/681404ff-89eb-420d-b1e2-6769d4b51636-config-volume\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.402992 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l4c9k"] Mar 08 00:25:08 crc kubenswrapper[4762]: E0308 00:25:08.403212 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="681404ff-89eb-420d-b1e2-6769d4b51636" containerName="collect-profiles" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 
00:25:08.403225 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="681404ff-89eb-420d-b1e2-6769d4b51636" containerName="collect-profiles" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.403315 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="681404ff-89eb-420d-b1e2-6769d4b51636" containerName="collect-profiles" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.404064 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.407018 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.415324 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4c9k"] Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.415560 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30000013-c882-4eaa-a7f0-fc380ef4f09c-utilities\") pod \"redhat-operators-l4c9k\" (UID: \"30000013-c882-4eaa-a7f0-fc380ef4f09c\") " pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.415627 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn5lb\" (UniqueName: \"kubernetes.io/projected/30000013-c882-4eaa-a7f0-fc380ef4f09c-kube-api-access-pn5lb\") pod \"redhat-operators-l4c9k\" (UID: \"30000013-c882-4eaa-a7f0-fc380ef4f09c\") " pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.415818 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30000013-c882-4eaa-a7f0-fc380ef4f09c-catalog-content\") 
pod \"redhat-operators-l4c9k\" (UID: \"30000013-c882-4eaa-a7f0-fc380ef4f09c\") " pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.516986 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30000013-c882-4eaa-a7f0-fc380ef4f09c-catalog-content\") pod \"redhat-operators-l4c9k\" (UID: \"30000013-c882-4eaa-a7f0-fc380ef4f09c\") " pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.517065 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30000013-c882-4eaa-a7f0-fc380ef4f09c-utilities\") pod \"redhat-operators-l4c9k\" (UID: \"30000013-c882-4eaa-a7f0-fc380ef4f09c\") " pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.517096 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn5lb\" (UniqueName: \"kubernetes.io/projected/30000013-c882-4eaa-a7f0-fc380ef4f09c-kube-api-access-pn5lb\") pod \"redhat-operators-l4c9k\" (UID: \"30000013-c882-4eaa-a7f0-fc380ef4f09c\") " pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.517698 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30000013-c882-4eaa-a7f0-fc380ef4f09c-utilities\") pod \"redhat-operators-l4c9k\" (UID: \"30000013-c882-4eaa-a7f0-fc380ef4f09c\") " pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.519146 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30000013-c882-4eaa-a7f0-fc380ef4f09c-catalog-content\") pod \"redhat-operators-l4c9k\" (UID: 
\"30000013-c882-4eaa-a7f0-fc380ef4f09c\") " pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.535138 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn5lb\" (UniqueName: \"kubernetes.io/projected/30000013-c882-4eaa-a7f0-fc380ef4f09c-kube-api-access-pn5lb\") pod \"redhat-operators-l4c9k\" (UID: \"30000013-c882-4eaa-a7f0-fc380ef4f09c\") " pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:25:08 crc kubenswrapper[4762]: E0308 00:25:08.551323 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30846f2355ea0583dd129fc3bc21d56f1644db85f37c7e512531d71b23121d23" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 00:25:08 crc kubenswrapper[4762]: E0308 00:25:08.553417 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30846f2355ea0583dd129fc3bc21d56f1644db85f37c7e512531d71b23121d23" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 00:25:08 crc kubenswrapper[4762]: E0308 00:25:08.555095 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30846f2355ea0583dd129fc3bc21d56f1644db85f37c7e512531d71b23121d23" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 00:25:08 crc kubenswrapper[4762]: E0308 00:25:08.555140 4762 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" 
podUID="47863e3b-949c-40f1-bdb3-2d940b78cda0" containerName="kube-multus-additional-cni-plugins" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.589625 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" event={"ID":"681404ff-89eb-420d-b1e2-6769d4b51636","Type":"ContainerDied","Data":"d50d43658c612dfc77675758a305e04657563b82e5fbb4c8232544d18070c063"} Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.590061 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d50d43658c612dfc77675758a305e04657563b82e5fbb4c8232544d18070c063" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.590114 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.597632 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc85x" event={"ID":"63ac2172-da6d-436b-8cde-593837d65920","Type":"ContainerStarted","Data":"147ba06789405ddc0730aab97de66e98d94bafdaf1a1a0b8939a030049a60d44"} Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.600072 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8bhh" event={"ID":"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b","Type":"ContainerStarted","Data":"8ebcf030a6291365d0b5a4b18b4f695fceb60bde46a2ea6fb819c695cb25a4d5"} Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.724879 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:08 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:08 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:08 crc 
kubenswrapper[4762]: healthz check failed Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.724955 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.731144 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.735268 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.736226 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.744237 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.744488 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.747438 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.817481 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g7kzd"] Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.818635 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.822680 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b14b4da-20cb-4559-9d3b-007f0f76ae72-utilities\") pod \"redhat-operators-g7kzd\" (UID: \"2b14b4da-20cb-4559-9d3b-007f0f76ae72\") " pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.822735 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfxtk\" (UniqueName: \"kubernetes.io/projected/2b14b4da-20cb-4559-9d3b-007f0f76ae72-kube-api-access-cfxtk\") pod \"redhat-operators-g7kzd\" (UID: \"2b14b4da-20cb-4559-9d3b-007f0f76ae72\") " pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.822899 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19e99834-40a7-4400-9ce9-c624e1407052-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"19e99834-40a7-4400-9ce9-c624e1407052\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.822936 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19e99834-40a7-4400-9ce9-c624e1407052-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"19e99834-40a7-4400-9ce9-c624e1407052\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.823001 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2b14b4da-20cb-4559-9d3b-007f0f76ae72-catalog-content\") pod \"redhat-operators-g7kzd\" (UID: \"2b14b4da-20cb-4559-9d3b-007f0f76ae72\") " pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.824654 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g7kzd"] Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.924214 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b14b4da-20cb-4559-9d3b-007f0f76ae72-catalog-content\") pod \"redhat-operators-g7kzd\" (UID: \"2b14b4da-20cb-4559-9d3b-007f0f76ae72\") " pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.924271 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b14b4da-20cb-4559-9d3b-007f0f76ae72-utilities\") pod \"redhat-operators-g7kzd\" (UID: \"2b14b4da-20cb-4559-9d3b-007f0f76ae72\") " pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.924320 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfxtk\" (UniqueName: \"kubernetes.io/projected/2b14b4da-20cb-4559-9d3b-007f0f76ae72-kube-api-access-cfxtk\") pod \"redhat-operators-g7kzd\" (UID: \"2b14b4da-20cb-4559-9d3b-007f0f76ae72\") " pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.924402 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19e99834-40a7-4400-9ce9-c624e1407052-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"19e99834-40a7-4400-9ce9-c624e1407052\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 08 00:25:08 crc kubenswrapper[4762]: 
I0308 00:25:08.924429 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19e99834-40a7-4400-9ce9-c624e1407052-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"19e99834-40a7-4400-9ce9-c624e1407052\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.924512 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19e99834-40a7-4400-9ce9-c624e1407052-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"19e99834-40a7-4400-9ce9-c624e1407052\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.925119 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b14b4da-20cb-4559-9d3b-007f0f76ae72-catalog-content\") pod \"redhat-operators-g7kzd\" (UID: \"2b14b4da-20cb-4559-9d3b-007f0f76ae72\") " pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.925175 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b14b4da-20cb-4559-9d3b-007f0f76ae72-utilities\") pod \"redhat-operators-g7kzd\" (UID: \"2b14b4da-20cb-4559-9d3b-007f0f76ae72\") " pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.949093 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19e99834-40a7-4400-9ce9-c624e1407052-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"19e99834-40a7-4400-9ce9-c624e1407052\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 08 00:25:08 crc kubenswrapper[4762]: I0308 00:25:08.959801 4762 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cfxtk\" (UniqueName: \"kubernetes.io/projected/2b14b4da-20cb-4559-9d3b-007f0f76ae72-kube-api-access-cfxtk\") pod \"redhat-operators-g7kzd\" (UID: \"2b14b4da-20cb-4559-9d3b-007f0f76ae72\") " pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:25:09 crc kubenswrapper[4762]: I0308 00:25:09.137933 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 08 00:25:09 crc kubenswrapper[4762]: I0308 00:25:09.154857 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:25:09 crc kubenswrapper[4762]: I0308 00:25:09.157170 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l4c9k"] Mar 08 00:25:09 crc kubenswrapper[4762]: W0308 00:25:09.234213 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30000013_c882_4eaa_a7f0_fc380ef4f09c.slice/crio-fc2f74775b5368a3707dd41f9fae880938446105bb15b0a682dfa6f19f78f319 WatchSource:0}: Error finding container fc2f74775b5368a3707dd41f9fae880938446105bb15b0a682dfa6f19f78f319: Status 404 returned error can't find the container with id fc2f74775b5368a3707dd41f9fae880938446105bb15b0a682dfa6f19f78f319 Mar 08 00:25:09 crc kubenswrapper[4762]: I0308 00:25:09.629733 4762 generic.go:334] "Generic (PLEG): container finished" podID="63ac2172-da6d-436b-8cde-593837d65920" containerID="1ba4a2533de809584140b4be228d0d78eb54d55a4fc17c8edc50140429063279" exitCode=0 Mar 08 00:25:09 crc kubenswrapper[4762]: I0308 00:25:09.630146 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc85x" event={"ID":"63ac2172-da6d-436b-8cde-593837d65920","Type":"ContainerDied","Data":"1ba4a2533de809584140b4be228d0d78eb54d55a4fc17c8edc50140429063279"} Mar 08 00:25:09 crc kubenswrapper[4762]: 
I0308 00:25:09.635947 4762 generic.go:334] "Generic (PLEG): container finished" podID="1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" containerID="5cd9084f379c29feb9598638daabdff9a8d3ed92e4c7a4d44b93643a8b039d86" exitCode=0 Mar 08 00:25:09 crc kubenswrapper[4762]: I0308 00:25:09.635995 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8bhh" event={"ID":"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b","Type":"ContainerDied","Data":"5cd9084f379c29feb9598638daabdff9a8d3ed92e4c7a4d44b93643a8b039d86"} Mar 08 00:25:09 crc kubenswrapper[4762]: I0308 00:25:09.638693 4762 generic.go:334] "Generic (PLEG): container finished" podID="30000013-c882-4eaa-a7f0-fc380ef4f09c" containerID="cc98301ee742ffeee9934eab6b331bfca942453110de48d5f75cfb367aaab0b3" exitCode=0 Mar 08 00:25:09 crc kubenswrapper[4762]: I0308 00:25:09.638718 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4c9k" event={"ID":"30000013-c882-4eaa-a7f0-fc380ef4f09c","Type":"ContainerDied","Data":"cc98301ee742ffeee9934eab6b331bfca942453110de48d5f75cfb367aaab0b3"} Mar 08 00:25:09 crc kubenswrapper[4762]: I0308 00:25:09.638732 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4c9k" event={"ID":"30000013-c882-4eaa-a7f0-fc380ef4f09c","Type":"ContainerStarted","Data":"fc2f74775b5368a3707dd41f9fae880938446105bb15b0a682dfa6f19f78f319"} Mar 08 00:25:09 crc kubenswrapper[4762]: I0308 00:25:09.646883 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 08 00:25:09 crc kubenswrapper[4762]: I0308 00:25:09.720132 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:09 crc kubenswrapper[4762]: [-]has-synced failed: reason 
withheld Mar 08 00:25:09 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:09 crc kubenswrapper[4762]: healthz check failed Mar 08 00:25:09 crc kubenswrapper[4762]: I0308 00:25:09.720203 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:09 crc kubenswrapper[4762]: I0308 00:25:09.843533 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g7kzd"] Mar 08 00:25:09 crc kubenswrapper[4762]: W0308 00:25:09.870440 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b14b4da_20cb_4559_9d3b_007f0f76ae72.slice/crio-134802603ff4abacd7b220d039cc531ebab9e8e4738ce478483eaf61ec066059 WatchSource:0}: Error finding container 134802603ff4abacd7b220d039cc531ebab9e8e4738ce478483eaf61ec066059: Status 404 returned error can't find the container with id 134802603ff4abacd7b220d039cc531ebab9e8e4738ce478483eaf61ec066059 Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.210913 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.215918 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.218378 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.220749 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.226456 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.251437 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aaff245-2b04-45a1-8304-0ea7b9d90d3c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6aaff245-2b04-45a1-8304-0ea7b9d90d3c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.251492 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aaff245-2b04-45a1-8304-0ea7b9d90d3c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6aaff245-2b04-45a1-8304-0ea7b9d90d3c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.312955 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.353308 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aaff245-2b04-45a1-8304-0ea7b9d90d3c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6aaff245-2b04-45a1-8304-0ea7b9d90d3c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 08 
00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.353488 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aaff245-2b04-45a1-8304-0ea7b9d90d3c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6aaff245-2b04-45a1-8304-0ea7b9d90d3c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.354036 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aaff245-2b04-45a1-8304-0ea7b9d90d3c-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6aaff245-2b04-45a1-8304-0ea7b9d90d3c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.384067 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aaff245-2b04-45a1-8304-0ea7b9d90d3c-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6aaff245-2b04-45a1-8304-0ea7b9d90d3c\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.581600 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.619623 4762 ???:1] "http: TLS handshake error from 192.168.126.11:37960: no serving certificate available for the kubelet" Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.702225 4762 generic.go:334] "Generic (PLEG): container finished" podID="2b14b4da-20cb-4559-9d3b-007f0f76ae72" containerID="97f10937433dda76ee4a69c5360ec7e1b2c3d5dd9dd2d4c254289486305387fb" exitCode=0 Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.702401 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g7kzd" event={"ID":"2b14b4da-20cb-4559-9d3b-007f0f76ae72","Type":"ContainerDied","Data":"97f10937433dda76ee4a69c5360ec7e1b2c3d5dd9dd2d4c254289486305387fb"} Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.702454 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g7kzd" event={"ID":"2b14b4da-20cb-4559-9d3b-007f0f76ae72","Type":"ContainerStarted","Data":"134802603ff4abacd7b220d039cc531ebab9e8e4738ce478483eaf61ec066059"} Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.719515 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:10 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:10 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:10 crc kubenswrapper[4762]: healthz check failed Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.719633 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:10 
crc kubenswrapper[4762]: I0308 00:25:10.733648 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=0.733623468 podStartE2EDuration="733.623468ms" podCreationTimestamp="2026-03-08 00:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:10.725523399 +0000 UTC m=+132.199667733" watchObservedRunningTime="2026-03-08 00:25:10.733623468 +0000 UTC m=+132.207767812" Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.740111 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"19e99834-40a7-4400-9ce9-c624e1407052","Type":"ContainerStarted","Data":"e1208b1e16031e7a9242ac517629db8543e7fe3c15d9c320e1fa40b8becb0b96"} Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.740178 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"19e99834-40a7-4400-9ce9-c624e1407052","Type":"ContainerStarted","Data":"f1eec21557c91f1ae1f2564059e8a229ffc6c7c1263c3c44495fff88e126fb15"} Mar 08 00:25:10 crc kubenswrapper[4762]: I0308 00:25:10.770114 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.770096453 podStartE2EDuration="2.770096453s" podCreationTimestamp="2026-03-08 00:25:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:10.768230152 +0000 UTC m=+132.242374496" watchObservedRunningTime="2026-03-08 00:25:10.770096453 +0000 UTC m=+132.244240797" Mar 08 00:25:11 crc kubenswrapper[4762]: I0308 00:25:11.215305 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 08 
00:25:11 crc kubenswrapper[4762]: W0308 00:25:11.270150 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6aaff245_2b04_45a1_8304_0ea7b9d90d3c.slice/crio-0e34e3bfe889500fa638903b802f370ac98c5d38801f2fc5e84af7d1739a79cb WatchSource:0}: Error finding container 0e34e3bfe889500fa638903b802f370ac98c5d38801f2fc5e84af7d1739a79cb: Status 404 returned error can't find the container with id 0e34e3bfe889500fa638903b802f370ac98c5d38801f2fc5e84af7d1739a79cb Mar 08 00:25:11 crc kubenswrapper[4762]: I0308 00:25:11.656313 4762 ???:1] "http: TLS handshake error from 192.168.126.11:37976: no serving certificate available for the kubelet" Mar 08 00:25:11 crc kubenswrapper[4762]: I0308 00:25:11.719082 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:11 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:11 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:11 crc kubenswrapper[4762]: healthz check failed Mar 08 00:25:11 crc kubenswrapper[4762]: I0308 00:25:11.719614 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:11 crc kubenswrapper[4762]: I0308 00:25:11.764639 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6aaff245-2b04-45a1-8304-0ea7b9d90d3c","Type":"ContainerStarted","Data":"0e34e3bfe889500fa638903b802f370ac98c5d38801f2fc5e84af7d1739a79cb"} Mar 08 00:25:11 crc kubenswrapper[4762]: I0308 00:25:11.767731 4762 generic.go:334] "Generic (PLEG): container finished" podID="19e99834-40a7-4400-9ce9-c624e1407052" 
containerID="e1208b1e16031e7a9242ac517629db8543e7fe3c15d9c320e1fa40b8becb0b96" exitCode=0 Mar 08 00:25:11 crc kubenswrapper[4762]: I0308 00:25:11.767807 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"19e99834-40a7-4400-9ce9-c624e1407052","Type":"ContainerDied","Data":"e1208b1e16031e7a9242ac517629db8543e7fe3c15d9c320e1fa40b8becb0b96"} Mar 08 00:25:12 crc kubenswrapper[4762]: I0308 00:25:12.719423 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:12 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:12 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:12 crc kubenswrapper[4762]: healthz check failed Mar 08 00:25:12 crc kubenswrapper[4762]: I0308 00:25:12.719528 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:12 crc kubenswrapper[4762]: I0308 00:25:12.814157 4762 generic.go:334] "Generic (PLEG): container finished" podID="6aaff245-2b04-45a1-8304-0ea7b9d90d3c" containerID="c8ab5ac8ff380f755bfb4b04fd0a0426cb9fa251dc3171d2cc7a2e89ec106566" exitCode=0 Mar 08 00:25:12 crc kubenswrapper[4762]: I0308 00:25:12.814259 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6aaff245-2b04-45a1-8304-0ea7b9d90d3c","Type":"ContainerDied","Data":"c8ab5ac8ff380f755bfb4b04fd0a0426cb9fa251dc3171d2cc7a2e89ec106566"} Mar 08 00:25:13 crc kubenswrapper[4762]: I0308 00:25:13.232207 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:25:13 crc kubenswrapper[4762]: I0308 00:25:13.284330 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 08 00:25:13 crc kubenswrapper[4762]: I0308 00:25:13.539377 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-77vgj" Mar 08 00:25:13 crc kubenswrapper[4762]: I0308 00:25:13.558315 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=0.558295531 podStartE2EDuration="558.295531ms" podCreationTimestamp="2026-03-08 00:25:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:13.554358101 +0000 UTC m=+135.028502465" watchObservedRunningTime="2026-03-08 00:25:13.558295531 +0000 UTC m=+135.032439875" Mar 08 00:25:13 crc kubenswrapper[4762]: I0308 00:25:13.719024 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:13 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:13 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:13 crc kubenswrapper[4762]: healthz check failed Mar 08 00:25:13 crc kubenswrapper[4762]: I0308 00:25:13.719146 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:14 crc kubenswrapper[4762]: I0308 00:25:14.718931 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:14 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:14 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:14 crc kubenswrapper[4762]: healthz check failed Mar 08 00:25:14 crc kubenswrapper[4762]: I0308 00:25:14.719464 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:15 crc kubenswrapper[4762]: I0308 00:25:15.718849 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:15 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:15 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:15 crc kubenswrapper[4762]: healthz check failed Mar 08 00:25:15 crc kubenswrapper[4762]: I0308 00:25:15.718910 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:16 crc kubenswrapper[4762]: I0308 00:25:16.718425 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:16 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:16 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:16 crc 
kubenswrapper[4762]: healthz check failed Mar 08 00:25:16 crc kubenswrapper[4762]: I0308 00:25:16.718907 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:16 crc kubenswrapper[4762]: I0308 00:25:16.743217 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-84dbj" Mar 08 00:25:16 crc kubenswrapper[4762]: I0308 00:25:16.994967 4762 patch_prober.go:28] interesting pod/console-f9d7485db-842sk container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Mar 08 00:25:16 crc kubenswrapper[4762]: I0308 00:25:16.996915 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-842sk" podUID="ca82b7d9-bbba-4543-945b-e78923c1d3cf" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Mar 08 00:25:17 crc kubenswrapper[4762]: I0308 00:25:17.718506 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 08 00:25:17 crc kubenswrapper[4762]: [-]has-synced failed: reason withheld Mar 08 00:25:17 crc kubenswrapper[4762]: [+]process-running ok Mar 08 00:25:17 crc kubenswrapper[4762]: healthz check failed Mar 08 00:25:17 crc kubenswrapper[4762]: I0308 00:25:17.718551 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 00:25:18 crc kubenswrapper[4762]: E0308 00:25:18.560537 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30846f2355ea0583dd129fc3bc21d56f1644db85f37c7e512531d71b23121d23" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 00:25:18 crc kubenswrapper[4762]: E0308 00:25:18.563314 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30846f2355ea0583dd129fc3bc21d56f1644db85f37c7e512531d71b23121d23" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 00:25:18 crc kubenswrapper[4762]: E0308 00:25:18.565400 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30846f2355ea0583dd129fc3bc21d56f1644db85f37c7e512531d71b23121d23" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 00:25:18 crc kubenswrapper[4762]: E0308 00:25:18.565462 4762 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" podUID="47863e3b-949c-40f1-bdb3-2d940b78cda0" containerName="kube-multus-additional-cni-plugins" Mar 08 00:25:18 crc kubenswrapper[4762]: I0308 00:25:18.787826 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:25:18 crc kubenswrapper[4762]: I0308 00:25:18.793012 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.141955 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.142026 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.142092 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.142156 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.144966 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 08 
00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.145176 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.145244 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.154729 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.155080 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.168076 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.168298 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.173566 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" 
(UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.275778 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.282832 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.454542 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.897979 4762 ???:1] "http: TLS handshake error from 192.168.126.11:33520: no serving certificate available for the kubelet" Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.914636 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv"] Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.917587 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" podUID="6522ba1b-d390-4b3a-b825-00f66d60a0e9" containerName="controller-manager" containerID="cri-o://c8a17a49e9db4c5cf2b96c0063582987e9ac093845352682c8852165c30df648" gracePeriod=30 Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.935324 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg"] Mar 08 00:25:20 crc kubenswrapper[4762]: I0308 00:25:20.935679 4762 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" podUID="04273ede-bf68-404e-af9a-93340dd6ed77" containerName="route-controller-manager" containerID="cri-o://2442767b04775260174e609123089318b83f861e15cb9d47bc112e1e89993b13" gracePeriod=30 Mar 08 00:25:22 crc kubenswrapper[4762]: I0308 00:25:22.152145 4762 generic.go:334] "Generic (PLEG): container finished" podID="6522ba1b-d390-4b3a-b825-00f66d60a0e9" containerID="c8a17a49e9db4c5cf2b96c0063582987e9ac093845352682c8852165c30df648" exitCode=0 Mar 08 00:25:22 crc kubenswrapper[4762]: I0308 00:25:22.152222 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" event={"ID":"6522ba1b-d390-4b3a-b825-00f66d60a0e9","Type":"ContainerDied","Data":"c8a17a49e9db4c5cf2b96c0063582987e9ac093845352682c8852165c30df648"} Mar 08 00:25:23 crc kubenswrapper[4762]: I0308 00:25:23.280445 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 08 00:25:23 crc kubenswrapper[4762]: I0308 00:25:23.530345 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 08 00:25:23 crc kubenswrapper[4762]: I0308 00:25:23.585307 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=0.585278084 podStartE2EDuration="585.278084ms" podCreationTimestamp="2026-03-08 00:25:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:23.582702155 +0000 UTC m=+145.056846509" watchObservedRunningTime="2026-03-08 00:25:23.585278084 +0000 UTC m=+145.059422428" Mar 08 00:25:23 crc kubenswrapper[4762]: I0308 00:25:23.612430 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19e99834-40a7-4400-9ce9-c624e1407052-kubelet-dir\") pod \"19e99834-40a7-4400-9ce9-c624e1407052\" (UID: \"19e99834-40a7-4400-9ce9-c624e1407052\") " Mar 08 00:25:23 crc kubenswrapper[4762]: I0308 00:25:23.612562 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19e99834-40a7-4400-9ce9-c624e1407052-kube-api-access\") pod \"19e99834-40a7-4400-9ce9-c624e1407052\" (UID: \"19e99834-40a7-4400-9ce9-c624e1407052\") " Mar 08 00:25:23 crc kubenswrapper[4762]: I0308 00:25:23.612634 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19e99834-40a7-4400-9ce9-c624e1407052-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "19e99834-40a7-4400-9ce9-c624e1407052" (UID: "19e99834-40a7-4400-9ce9-c624e1407052"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:25:23 crc kubenswrapper[4762]: I0308 00:25:23.613903 4762 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19e99834-40a7-4400-9ce9-c624e1407052-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:23 crc kubenswrapper[4762]: I0308 00:25:23.630286 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e99834-40a7-4400-9ce9-c624e1407052-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "19e99834-40a7-4400-9ce9-c624e1407052" (UID: "19e99834-40a7-4400-9ce9-c624e1407052"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:25:23 crc kubenswrapper[4762]: I0308 00:25:23.715178 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19e99834-40a7-4400-9ce9-c624e1407052-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:24 crc kubenswrapper[4762]: I0308 00:25:24.168743 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"19e99834-40a7-4400-9ce9-c624e1407052","Type":"ContainerDied","Data":"f1eec21557c91f1ae1f2564059e8a229ffc6c7c1263c3c44495fff88e126fb15"} Mar 08 00:25:24 crc kubenswrapper[4762]: I0308 00:25:24.168824 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1eec21557c91f1ae1f2564059e8a229ffc6c7c1263c3c44495fff88e126fb15" Mar 08 00:25:24 crc kubenswrapper[4762]: I0308 00:25:24.168838 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 08 00:25:24 crc kubenswrapper[4762]: I0308 00:25:24.170829 4762 generic.go:334] "Generic (PLEG): container finished" podID="04273ede-bf68-404e-af9a-93340dd6ed77" containerID="2442767b04775260174e609123089318b83f861e15cb9d47bc112e1e89993b13" exitCode=0 Mar 08 00:25:24 crc kubenswrapper[4762]: I0308 00:25:24.170919 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" event={"ID":"04273ede-bf68-404e-af9a-93340dd6ed77","Type":"ContainerDied","Data":"2442767b04775260174e609123089318b83f861e15cb9d47bc112e1e89993b13"} Mar 08 00:25:24 crc kubenswrapper[4762]: I0308 00:25:24.505693 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 08 00:25:24 crc kubenswrapper[4762]: I0308 00:25:24.629122 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aaff245-2b04-45a1-8304-0ea7b9d90d3c-kubelet-dir\") pod \"6aaff245-2b04-45a1-8304-0ea7b9d90d3c\" (UID: \"6aaff245-2b04-45a1-8304-0ea7b9d90d3c\") " Mar 08 00:25:24 crc kubenswrapper[4762]: I0308 00:25:24.629192 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aaff245-2b04-45a1-8304-0ea7b9d90d3c-kube-api-access\") pod \"6aaff245-2b04-45a1-8304-0ea7b9d90d3c\" (UID: \"6aaff245-2b04-45a1-8304-0ea7b9d90d3c\") " Mar 08 00:25:24 crc kubenswrapper[4762]: I0308 00:25:24.629350 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6aaff245-2b04-45a1-8304-0ea7b9d90d3c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6aaff245-2b04-45a1-8304-0ea7b9d90d3c" (UID: "6aaff245-2b04-45a1-8304-0ea7b9d90d3c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:25:24 crc kubenswrapper[4762]: I0308 00:25:24.629742 4762 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6aaff245-2b04-45a1-8304-0ea7b9d90d3c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:24 crc kubenswrapper[4762]: I0308 00:25:24.638578 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6aaff245-2b04-45a1-8304-0ea7b9d90d3c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6aaff245-2b04-45a1-8304-0ea7b9d90d3c" (UID: "6aaff245-2b04-45a1-8304-0ea7b9d90d3c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:25:24 crc kubenswrapper[4762]: I0308 00:25:24.731799 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6aaff245-2b04-45a1-8304-0ea7b9d90d3c-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:24 crc kubenswrapper[4762]: I0308 00:25:24.814160 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:25:25 crc kubenswrapper[4762]: I0308 00:25:25.189116 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6aaff245-2b04-45a1-8304-0ea7b9d90d3c","Type":"ContainerDied","Data":"0e34e3bfe889500fa638903b802f370ac98c5d38801f2fc5e84af7d1739a79cb"} Mar 08 00:25:25 crc kubenswrapper[4762]: I0308 00:25:25.189501 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e34e3bfe889500fa638903b802f370ac98c5d38801f2fc5e84af7d1739a79cb" Mar 08 00:25:25 crc kubenswrapper[4762]: I0308 00:25:25.189257 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 08 00:25:25 crc kubenswrapper[4762]: I0308 00:25:25.712302 4762 patch_prober.go:28] interesting pod/controller-manager-7cfb5dcc4f-rv9sv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.47:8443/healthz\": dial tcp 10.217.0.47:8443: connect: connection refused" start-of-body= Mar 08 00:25:25 crc kubenswrapper[4762]: I0308 00:25:25.712410 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" podUID="6522ba1b-d390-4b3a-b825-00f66d60a0e9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.47:8443/healthz\": dial tcp 10.217.0.47:8443: connect: connection refused" Mar 08 00:25:25 crc kubenswrapper[4762]: I0308 00:25:25.716154 4762 patch_prober.go:28] interesting pod/route-controller-manager-749b5bd996-kplzg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.48:8443/healthz\": dial tcp 10.217.0.48:8443: connect: connection refused" start-of-body= Mar 08 00:25:25 crc kubenswrapper[4762]: I0308 00:25:25.716191 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" podUID="04273ede-bf68-404e-af9a-93340dd6ed77" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.48:8443/healthz\": dial tcp 10.217.0.48:8443: connect: connection refused" Mar 08 00:25:27 crc kubenswrapper[4762]: I0308 00:25:27.356016 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:25:27 crc kubenswrapper[4762]: I0308 00:25:27.360377 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:25:28 crc kubenswrapper[4762]: E0308 00:25:28.554377 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30846f2355ea0583dd129fc3bc21d56f1644db85f37c7e512531d71b23121d23" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 00:25:28 crc kubenswrapper[4762]: E0308 00:25:28.558981 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30846f2355ea0583dd129fc3bc21d56f1644db85f37c7e512531d71b23121d23" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 00:25:28 crc kubenswrapper[4762]: E0308 00:25:28.563804 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="30846f2355ea0583dd129fc3bc21d56f1644db85f37c7e512531d71b23121d23" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 08 00:25:28 crc kubenswrapper[4762]: E0308 00:25:28.563885 4762 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" podUID="47863e3b-949c-40f1-bdb3-2d940b78cda0" containerName="kube-multus-additional-cni-plugins" Mar 08 00:25:32 crc kubenswrapper[4762]: I0308 00:25:32.997833 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5pmsg"] Mar 08 00:25:33 crc kubenswrapper[4762]: I0308 00:25:33.241345 4762 generic.go:334] "Generic (PLEG): container finished" podID="fe7222be-b489-4bcb-bc44-0a8933cde1c5" 
containerID="9532b00b8ca740ffacae166f71c7c571f648951120f006293386757418b516f6" exitCode=0 Mar 08 00:25:33 crc kubenswrapper[4762]: I0308 00:25:33.241406 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29548800-x52sd" event={"ID":"fe7222be-b489-4bcb-bc44-0a8933cde1c5","Type":"ContainerDied","Data":"9532b00b8ca740ffacae166f71c7c571f648951120f006293386757418b516f6"} Mar 08 00:25:34 crc kubenswrapper[4762]: E0308 00:25:34.779818 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 08 00:25:34 crc kubenswrapper[4762]: E0308 00:25:34.780344 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gkqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7jhpd_openshift-marketplace(e304c866-47f5-4a38-b530-b816ec5e685a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 08 00:25:34 crc kubenswrapper[4762]: E0308 00:25:34.781735 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7jhpd" podUID="e304c866-47f5-4a38-b530-b816ec5e685a" Mar 08 00:25:34 crc 
kubenswrapper[4762]: E0308 00:25:34.824203 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Mar 08 00:25:34 crc kubenswrapper[4762]: E0308 00:25:34.824427 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pn5lb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-l4c9k_openshift-marketplace(30000013-c882-4eaa-a7f0-fc380ef4f09c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 08 00:25:34 crc kubenswrapper[4762]: E0308 00:25:34.825673 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-l4c9k" podUID="30000013-c882-4eaa-a7f0-fc380ef4f09c" Mar 08 00:25:35 crc kubenswrapper[4762]: I0308 00:25:35.256816 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-ghr8q_47863e3b-949c-40f1-bdb3-2d940b78cda0/kube-multus-additional-cni-plugins/0.log" Mar 08 00:25:35 crc kubenswrapper[4762]: I0308 00:25:35.256879 4762 generic.go:334] "Generic (PLEG): container finished" podID="47863e3b-949c-40f1-bdb3-2d940b78cda0" containerID="30846f2355ea0583dd129fc3bc21d56f1644db85f37c7e512531d71b23121d23" exitCode=137 Mar 08 00:25:35 crc kubenswrapper[4762]: I0308 00:25:35.256991 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" event={"ID":"47863e3b-949c-40f1-bdb3-2d940b78cda0","Type":"ContainerDied","Data":"30846f2355ea0583dd129fc3bc21d56f1644db85f37c7e512531d71b23121d23"} Mar 08 00:25:36 crc kubenswrapper[4762]: E0308 00:25:36.423407 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7jhpd" podUID="e304c866-47f5-4a38-b530-b816ec5e685a" Mar 08 00:25:36 crc kubenswrapper[4762]: E0308 00:25:36.423546 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-l4c9k" podUID="30000013-c882-4eaa-a7f0-fc380ef4f09c" Mar 08 00:25:36 crc kubenswrapper[4762]: E0308 00:25:36.496607 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 08 00:25:36 crc kubenswrapper[4762]: E0308 00:25:36.496864 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h62np,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},S
tdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-k8bhh_openshift-marketplace(1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 08 00:25:36 crc kubenswrapper[4762]: E0308 00:25:36.498796 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-k8bhh" podUID="1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" Mar 08 00:25:36 crc kubenswrapper[4762]: I0308 00:25:36.711767 4762 patch_prober.go:28] interesting pod/controller-manager-7cfb5dcc4f-rv9sv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.47:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 00:25:36 crc kubenswrapper[4762]: I0308 00:25:36.712298 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" podUID="6522ba1b-d390-4b3a-b825-00f66d60a0e9" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.47:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 00:25:36 crc kubenswrapper[4762]: I0308 00:25:36.716531 4762 patch_prober.go:28] interesting pod/route-controller-manager-749b5bd996-kplzg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure 
output="Get \"https://10.217.0.48:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 00:25:36 crc kubenswrapper[4762]: I0308 00:25:36.716571 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" podUID="04273ede-bf68-404e-af9a-93340dd6ed77" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.48:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 00:25:37 crc kubenswrapper[4762]: E0308 00:25:37.966104 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-k8bhh" podUID="1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.050794 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.082151 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.082660 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8s6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil
,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dcsdg_openshift-marketplace(210aa3ef-23bb-4e7b-9ff5-39cec85310ba): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.083860 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dcsdg" podUID="210aa3ef-23bb-4e7b-9ff5-39cec85310ba" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.098395 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29548800-x52sd" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.117252 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.119058 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-ghr8q_47863e3b-949c-40f1-bdb3-2d940b78cda0/kube-multus-additional-cni-plugins/0.log" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.119198 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.123055 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm"] Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.123364 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe7222be-b489-4bcb-bc44-0a8933cde1c5" containerName="image-pruner" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.123388 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe7222be-b489-4bcb-bc44-0a8933cde1c5" containerName="image-pruner" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.123409 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e99834-40a7-4400-9ce9-c624e1407052" containerName="pruner" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.123419 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e99834-40a7-4400-9ce9-c624e1407052" containerName="pruner" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.123444 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04273ede-bf68-404e-af9a-93340dd6ed77" containerName="route-controller-manager" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.123453 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="04273ede-bf68-404e-af9a-93340dd6ed77" containerName="route-controller-manager" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.123470 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6522ba1b-d390-4b3a-b825-00f66d60a0e9" containerName="controller-manager" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.123479 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="6522ba1b-d390-4b3a-b825-00f66d60a0e9" containerName="controller-manager" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.123496 4762 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6aaff245-2b04-45a1-8304-0ea7b9d90d3c" containerName="pruner" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.123505 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="6aaff245-2b04-45a1-8304-0ea7b9d90d3c" containerName="pruner" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.123515 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47863e3b-949c-40f1-bdb3-2d940b78cda0" containerName="kube-multus-additional-cni-plugins" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.123523 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="47863e3b-949c-40f1-bdb3-2d940b78cda0" containerName="kube-multus-additional-cni-plugins" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.123651 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="19e99834-40a7-4400-9ce9-c624e1407052" containerName="pruner" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.123669 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe7222be-b489-4bcb-bc44-0a8933cde1c5" containerName="image-pruner" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.123682 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="04273ede-bf68-404e-af9a-93340dd6ed77" containerName="route-controller-manager" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.123695 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="47863e3b-949c-40f1-bdb3-2d940b78cda0" containerName="kube-multus-additional-cni-plugins" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.123707 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="6522ba1b-d390-4b3a-b825-00f66d60a0e9" containerName="controller-manager" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.123715 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="6aaff245-2b04-45a1-8304-0ea7b9d90d3c" containerName="pruner" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.124305 4762 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.127312 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm"] Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.130878 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.131020 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tcc5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,
WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qv7hs_openshift-marketplace(1b1f4525-a957-4708-b166-0b16f67cb20a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.137970 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qv7hs" podUID="1b1f4525-a957-4708-b166-0b16f67cb20a" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.159304 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-client-ca\") pod \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.159346 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-proxy-ca-bundles\") pod \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.159367 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04273ede-bf68-404e-af9a-93340dd6ed77-serving-cert\") pod \"04273ede-bf68-404e-af9a-93340dd6ed77\" (UID: \"04273ede-bf68-404e-af9a-93340dd6ed77\") " Mar 08 
00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.159393 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/47863e3b-949c-40f1-bdb3-2d940b78cda0-cni-sysctl-allowlist\") pod \"47863e3b-949c-40f1-bdb3-2d940b78cda0\" (UID: \"47863e3b-949c-40f1-bdb3-2d940b78cda0\") " Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.159429 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6522ba1b-d390-4b3a-b825-00f66d60a0e9-serving-cert\") pod \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.159450 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fe7222be-b489-4bcb-bc44-0a8933cde1c5-serviceca\") pod \"fe7222be-b489-4bcb-bc44-0a8933cde1c5\" (UID: \"fe7222be-b489-4bcb-bc44-0a8933cde1c5\") " Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.159488 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04273ede-bf68-404e-af9a-93340dd6ed77-client-ca\") pod \"04273ede-bf68-404e-af9a-93340dd6ed77\" (UID: \"04273ede-bf68-404e-af9a-93340dd6ed77\") " Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.159514 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/47863e3b-949c-40f1-bdb3-2d940b78cda0-ready\") pod \"47863e3b-949c-40f1-bdb3-2d940b78cda0\" (UID: \"47863e3b-949c-40f1-bdb3-2d940b78cda0\") " Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.159539 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsxsb\" (UniqueName: 
\"kubernetes.io/projected/fe7222be-b489-4bcb-bc44-0a8933cde1c5-kube-api-access-lsxsb\") pod \"fe7222be-b489-4bcb-bc44-0a8933cde1c5\" (UID: \"fe7222be-b489-4bcb-bc44-0a8933cde1c5\") " Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.159555 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfhmz\" (UniqueName: \"kubernetes.io/projected/47863e3b-949c-40f1-bdb3-2d940b78cda0-kube-api-access-pfhmz\") pod \"47863e3b-949c-40f1-bdb3-2d940b78cda0\" (UID: \"47863e3b-949c-40f1-bdb3-2d940b78cda0\") " Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.159591 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-config\") pod \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.159616 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5bgn\" (UniqueName: \"kubernetes.io/projected/04273ede-bf68-404e-af9a-93340dd6ed77-kube-api-access-c5bgn\") pod \"04273ede-bf68-404e-af9a-93340dd6ed77\" (UID: \"04273ede-bf68-404e-af9a-93340dd6ed77\") " Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.159646 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/47863e3b-949c-40f1-bdb3-2d940b78cda0-tuning-conf-dir\") pod \"47863e3b-949c-40f1-bdb3-2d940b78cda0\" (UID: \"47863e3b-949c-40f1-bdb3-2d940b78cda0\") " Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.159669 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkjv9\" (UniqueName: \"kubernetes.io/projected/6522ba1b-d390-4b3a-b825-00f66d60a0e9-kube-api-access-pkjv9\") pod \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\" (UID: \"6522ba1b-d390-4b3a-b825-00f66d60a0e9\") " Mar 08 00:25:38 
crc kubenswrapper[4762]: I0308 00:25:38.159699 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04273ede-bf68-404e-af9a-93340dd6ed77-config\") pod \"04273ede-bf68-404e-af9a-93340dd6ed77\" (UID: \"04273ede-bf68-404e-af9a-93340dd6ed77\") " Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.161453 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04273ede-bf68-404e-af9a-93340dd6ed77-config" (OuterVolumeSpecName: "config") pod "04273ede-bf68-404e-af9a-93340dd6ed77" (UID: "04273ede-bf68-404e-af9a-93340dd6ed77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.161982 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-client-ca" (OuterVolumeSpecName: "client-ca") pod "6522ba1b-d390-4b3a-b825-00f66d60a0e9" (UID: "6522ba1b-d390-4b3a-b825-00f66d60a0e9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.162349 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6522ba1b-d390-4b3a-b825-00f66d60a0e9" (UID: "6522ba1b-d390-4b3a-b825-00f66d60a0e9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.171017 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04273ede-bf68-404e-af9a-93340dd6ed77-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "04273ede-bf68-404e-af9a-93340dd6ed77" (UID: "04273ede-bf68-404e-af9a-93340dd6ed77"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.172088 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47863e3b-949c-40f1-bdb3-2d940b78cda0-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "47863e3b-949c-40f1-bdb3-2d940b78cda0" (UID: "47863e3b-949c-40f1-bdb3-2d940b78cda0"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.237335 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47863e3b-949c-40f1-bdb3-2d940b78cda0-ready" (OuterVolumeSpecName: "ready") pod "47863e3b-949c-40f1-bdb3-2d940b78cda0" (UID: "47863e3b-949c-40f1-bdb3-2d940b78cda0"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.237914 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe7222be-b489-4bcb-bc44-0a8933cde1c5-serviceca" (OuterVolumeSpecName: "serviceca") pod "fe7222be-b489-4bcb-bc44-0a8933cde1c5" (UID: "fe7222be-b489-4bcb-bc44-0a8933cde1c5"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.238077 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.238316 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vzqll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},
RestartPolicy:nil,} start failed in pod redhat-marketplace-wc85x_openshift-marketplace(63ac2172-da6d-436b-8cde-593837d65920): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.238579 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.238679 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvj22,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,App
ArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-x7tm2_openshift-marketplace(82abd8f0-adc8-4094-a833-073e1cc68f50): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.248353 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-config" (OuterVolumeSpecName: "config") pod "6522ba1b-d390-4b3a-b825-00f66d60a0e9" (UID: "6522ba1b-d390-4b3a-b825-00f66d60a0e9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.248432 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-x7tm2" podUID="82abd8f0-adc8-4094-a833-073e1cc68f50" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.248994 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/47863e3b-949c-40f1-bdb3-2d940b78cda0-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "47863e3b-949c-40f1-bdb3-2d940b78cda0" (UID: "47863e3b-949c-40f1-bdb3-2d940b78cda0"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.249557 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04273ede-bf68-404e-af9a-93340dd6ed77-client-ca" (OuterVolumeSpecName: "client-ca") pod "04273ede-bf68-404e-af9a-93340dd6ed77" (UID: "04273ede-bf68-404e-af9a-93340dd6ed77"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.249642 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wc85x" podUID="63ac2172-da6d-436b-8cde-593837d65920" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.258136 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.261103 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-config\") pod \"controller-manager-bfd9f6b94-rbsmm\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.261170 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-client-ca\") pod \"controller-manager-bfd9f6b94-rbsmm\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc 
kubenswrapper[4762]: I0308 00:25:38.261208 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-proxy-ca-bundles\") pod \"controller-manager-bfd9f6b94-rbsmm\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.261239 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53485f74-9ce0-4382-8219-5e9a42956618-serving-cert\") pod \"controller-manager-bfd9f6b94-rbsmm\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.261274 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72lx8\" (UniqueName: \"kubernetes.io/projected/53485f74-9ce0-4382-8219-5e9a42956618-kube-api-access-72lx8\") pod \"controller-manager-bfd9f6b94-rbsmm\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.261372 4762 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04273ede-bf68-404e-af9a-93340dd6ed77-client-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.261394 4762 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/47863e3b-949c-40f1-bdb3-2d940b78cda0-ready\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.261403 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.261415 4762 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/47863e3b-949c-40f1-bdb3-2d940b78cda0-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.261426 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04273ede-bf68-404e-af9a-93340dd6ed77-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.261439 4762 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-client-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.261449 4762 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6522ba1b-d390-4b3a-b825-00f66d60a0e9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.261461 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04273ede-bf68-404e-af9a-93340dd6ed77-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.261471 4762 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/47863e3b-949c-40f1-bdb3-2d940b78cda0-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.261486 4762 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fe7222be-b489-4bcb-bc44-0a8933cde1c5-serviceca\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:38 crc kubenswrapper[4762]: 
I0308 00:25:38.270380 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe7222be-b489-4bcb-bc44-0a8933cde1c5-kube-api-access-lsxsb" (OuterVolumeSpecName: "kube-api-access-lsxsb") pod "fe7222be-b489-4bcb-bc44-0a8933cde1c5" (UID: "fe7222be-b489-4bcb-bc44-0a8933cde1c5"). InnerVolumeSpecName "kube-api-access-lsxsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.270980 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6522ba1b-d390-4b3a-b825-00f66d60a0e9-kube-api-access-pkjv9" (OuterVolumeSpecName: "kube-api-access-pkjv9") pod "6522ba1b-d390-4b3a-b825-00f66d60a0e9" (UID: "6522ba1b-d390-4b3a-b825-00f66d60a0e9"). InnerVolumeSpecName "kube-api-access-pkjv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.271049 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6522ba1b-d390-4b3a-b825-00f66d60a0e9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6522ba1b-d390-4b3a-b825-00f66d60a0e9" (UID: "6522ba1b-d390-4b3a-b825-00f66d60a0e9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.282083 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47863e3b-949c-40f1-bdb3-2d940b78cda0-kube-api-access-pfhmz" (OuterVolumeSpecName: "kube-api-access-pfhmz") pod "47863e3b-949c-40f1-bdb3-2d940b78cda0" (UID: "47863e3b-949c-40f1-bdb3-2d940b78cda0"). InnerVolumeSpecName "kube-api-access-pfhmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.287675 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04273ede-bf68-404e-af9a-93340dd6ed77-kube-api-access-c5bgn" (OuterVolumeSpecName: "kube-api-access-c5bgn") pod "04273ede-bf68-404e-af9a-93340dd6ed77" (UID: "04273ede-bf68-404e-af9a-93340dd6ed77"). InnerVolumeSpecName "kube-api-access-c5bgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.303866 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" event={"ID":"6522ba1b-d390-4b3a-b825-00f66d60a0e9","Type":"ContainerDied","Data":"df1992736903297094b3b0cc35e377165e94c9a52b4a884b8ce407a066260fb3"} Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.303935 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.303948 4762 scope.go:117] "RemoveContainer" containerID="c8a17a49e9db4c5cf2b96c0063582987e9ac093845352682c8852165c30df648" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.306469 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-ghr8q_47863e3b-949c-40f1-bdb3-2d940b78cda0/kube-multus-additional-cni-plugins/0.log" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.306575 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" event={"ID":"47863e3b-949c-40f1-bdb3-2d940b78cda0","Type":"ContainerDied","Data":"d96f1a91aac0066b56b4679f0b13c57e3d6ea049bb6e828c5514cac5916b6571"} Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.306739 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-ghr8q" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.329406 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" event={"ID":"04273ede-bf68-404e-af9a-93340dd6ed77","Type":"ContainerDied","Data":"4c64c0d2576d72575970d182f47ed678591e85f7d9357b3d01bb07e983904836"} Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.329979 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.334940 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29548800-x52sd" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.337565 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29548800-x52sd" event={"ID":"fe7222be-b489-4bcb-bc44-0a8933cde1c5","Type":"ContainerDied","Data":"8151e2042ac21e0af491f452ea8917140a9c046741c15e70a9a133078ca8ea76"} Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.337618 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8151e2042ac21e0af491f452ea8917140a9c046741c15e70a9a133078ca8ea76" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.362594 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-config\") pod \"controller-manager-bfd9f6b94-rbsmm\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.362658 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-client-ca\") pod \"controller-manager-bfd9f6b94-rbsmm\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.362693 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-proxy-ca-bundles\") pod \"controller-manager-bfd9f6b94-rbsmm\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.362741 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53485f74-9ce0-4382-8219-5e9a42956618-serving-cert\") pod \"controller-manager-bfd9f6b94-rbsmm\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.362776 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72lx8\" (UniqueName: \"kubernetes.io/projected/53485f74-9ce0-4382-8219-5e9a42956618-kube-api-access-72lx8\") pod \"controller-manager-bfd9f6b94-rbsmm\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.362869 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6522ba1b-d390-4b3a-b825-00f66d60a0e9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.362887 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfhmz\" (UniqueName: 
\"kubernetes.io/projected/47863e3b-949c-40f1-bdb3-2d940b78cda0-kube-api-access-pfhmz\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.362899 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsxsb\" (UniqueName: \"kubernetes.io/projected/fe7222be-b489-4bcb-bc44-0a8933cde1c5-kube-api-access-lsxsb\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.362911 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5bgn\" (UniqueName: \"kubernetes.io/projected/04273ede-bf68-404e-af9a-93340dd6ed77-kube-api-access-c5bgn\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.362922 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkjv9\" (UniqueName: \"kubernetes.io/projected/6522ba1b-d390-4b3a-b825-00f66d60a0e9-kube-api-access-pkjv9\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.368009 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qv7hs" podUID="1b1f4525-a957-4708-b166-0b16f67cb20a" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.369227 4762 scope.go:117] "RemoveContainer" containerID="30846f2355ea0583dd129fc3bc21d56f1644db85f37c7e512531d71b23121d23" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.369713 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dcsdg" podUID="210aa3ef-23bb-4e7b-9ff5-39cec85310ba" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.369848 4762 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wc85x" podUID="63ac2172-da6d-436b-8cde-593837d65920" Mar 08 00:25:38 crc kubenswrapper[4762]: E0308 00:25:38.369922 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-x7tm2" podUID="82abd8f0-adc8-4094-a833-073e1cc68f50" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.372647 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-config\") pod \"controller-manager-bfd9f6b94-rbsmm\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.373368 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-client-ca\") pod \"controller-manager-bfd9f6b94-rbsmm\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.373790 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-proxy-ca-bundles\") pod \"controller-manager-bfd9f6b94-rbsmm\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.393348 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53485f74-9ce0-4382-8219-5e9a42956618-serving-cert\") pod \"controller-manager-bfd9f6b94-rbsmm\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.400745 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-ghr8q"] Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.411167 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-ghr8q"] Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.421493 4762 scope.go:117] "RemoveContainer" containerID="2442767b04775260174e609123089318b83f861e15cb9d47bc112e1e89993b13" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.424168 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72lx8\" (UniqueName: \"kubernetes.io/projected/53485f74-9ce0-4382-8219-5e9a42956618-kube-api-access-72lx8\") pod \"controller-manager-bfd9f6b94-rbsmm\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc kubenswrapper[4762]: W0308 00:25:38.492926 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-b740f0c9e5f8a3336c29b173a0763018ead414584e2f6b4dfee699d4f8e89f78 WatchSource:0}: Error finding container b740f0c9e5f8a3336c29b173a0763018ead414584e2f6b4dfee699d4f8e89f78: Status 404 returned error can't find the container with id b740f0c9e5f8a3336c29b173a0763018ead414584e2f6b4dfee699d4f8e89f78 Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.511990 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv"] Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.515241 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cfb5dcc4f-rv9sv"] Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.537834 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg"] Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.546044 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749b5bd996-kplzg"] Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.566170 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:38 crc kubenswrapper[4762]: I0308 00:25:38.816235 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm"] Mar 08 00:25:38 crc kubenswrapper[4762]: W0308 00:25:38.819003 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53485f74_9ce0_4382_8219_5e9a42956618.slice/crio-c8b5b4c5d8a352e4b61164202d87be79c97efdf836aaea696095f8dbacf144cb WatchSource:0}: Error finding container c8b5b4c5d8a352e4b61164202d87be79c97efdf836aaea696095f8dbacf144cb: Status 404 returned error can't find the container with id c8b5b4c5d8a352e4b61164202d87be79c97efdf836aaea696095f8dbacf144cb Mar 08 00:25:39 crc kubenswrapper[4762]: I0308 00:25:39.285857 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04273ede-bf68-404e-af9a-93340dd6ed77" path="/var/lib/kubelet/pods/04273ede-bf68-404e-af9a-93340dd6ed77/volumes" Mar 08 00:25:39 crc kubenswrapper[4762]: I0308 00:25:39.287072 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="47863e3b-949c-40f1-bdb3-2d940b78cda0" path="/var/lib/kubelet/pods/47863e3b-949c-40f1-bdb3-2d940b78cda0/volumes" Mar 08 00:25:39 crc kubenswrapper[4762]: I0308 00:25:39.287714 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6522ba1b-d390-4b3a-b825-00f66d60a0e9" path="/var/lib/kubelet/pods/6522ba1b-d390-4b3a-b825-00f66d60a0e9/volumes" Mar 08 00:25:39 crc kubenswrapper[4762]: I0308 00:25:39.342664 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"0531fa77f3927a7a516f66f79041b773411b9651b95eddb0a991030f251ad4ca"} Mar 08 00:25:39 crc kubenswrapper[4762]: I0308 00:25:39.342714 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2c794f54013850eea865042b939f5e174ca5b3cbc73a9c3bf12e61af1dac9b02"} Mar 08 00:25:39 crc kubenswrapper[4762]: I0308 00:25:39.347641 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" event={"ID":"53485f74-9ce0-4382-8219-5e9a42956618","Type":"ContainerStarted","Data":"704e1a5066431b4a6878c791d39a641b6aab61dab7d7427b03d00ead0a8a23c5"} Mar 08 00:25:39 crc kubenswrapper[4762]: I0308 00:25:39.347672 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" event={"ID":"53485f74-9ce0-4382-8219-5e9a42956618","Type":"ContainerStarted","Data":"c8b5b4c5d8a352e4b61164202d87be79c97efdf836aaea696095f8dbacf144cb"} Mar 08 00:25:39 crc kubenswrapper[4762]: I0308 00:25:39.351136 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c00d76e6a8f1f64d891bec25da929def39dc276a04646d426278f8ac4712ca62"} Mar 08 00:25:39 crc kubenswrapper[4762]: I0308 00:25:39.351198 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"86f4d3fe6ae79c0abb92cf1cd9fa54b1e209b35af944be0d0183bf659dc6d077"} Mar 08 00:25:39 crc kubenswrapper[4762]: I0308 00:25:39.358569 4762 generic.go:334] "Generic (PLEG): container finished" podID="2b14b4da-20cb-4559-9d3b-007f0f76ae72" containerID="c2123713275486e7309a836f890300537812d275a713049177980929b010f374" exitCode=0 Mar 08 00:25:39 crc kubenswrapper[4762]: I0308 00:25:39.358680 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g7kzd" event={"ID":"2b14b4da-20cb-4559-9d3b-007f0f76ae72","Type":"ContainerDied","Data":"c2123713275486e7309a836f890300537812d275a713049177980929b010f374"} Mar 08 00:25:39 crc kubenswrapper[4762]: I0308 00:25:39.363405 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d7d048849688a80ccbc3f1e8325b6ab745f54751cfeabe81b1f10a82c293965c"} Mar 08 00:25:39 crc kubenswrapper[4762]: I0308 00:25:39.363438 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b740f0c9e5f8a3336c29b173a0763018ead414584e2f6b4dfee699d4f8e89f78"} Mar 08 00:25:39 crc kubenswrapper[4762]: I0308 00:25:39.363675 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:25:39 crc kubenswrapper[4762]: I0308 00:25:39.401569 4762 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" podStartSLOduration=19.401542353 podStartE2EDuration="19.401542353s" podCreationTimestamp="2026-03-08 00:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:39.397451947 +0000 UTC m=+160.871596291" watchObservedRunningTime="2026-03-08 00:25:39.401542353 +0000 UTC m=+160.875686697" Mar 08 00:25:40 crc kubenswrapper[4762]: I0308 00:25:40.370691 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:40 crc kubenswrapper[4762]: I0308 00:25:40.374605 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:40 crc kubenswrapper[4762]: I0308 00:25:40.907279 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm"] Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.044178 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz"] Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.047120 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.049079 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.052120 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.052353 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.052474 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.052551 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.052680 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.071638 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz"] Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.112105 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de4ba02-a54d-4ffb-a781-267c6a741abe-config\") pod \"route-controller-manager-7c9ccb858b-blnmz\" (UID: \"8de4ba02-a54d-4ffb-a781-267c6a741abe\") " pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.112173 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8de4ba02-a54d-4ffb-a781-267c6a741abe-client-ca\") pod \"route-controller-manager-7c9ccb858b-blnmz\" (UID: \"8de4ba02-a54d-4ffb-a781-267c6a741abe\") " pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.112199 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8de4ba02-a54d-4ffb-a781-267c6a741abe-serving-cert\") pod \"route-controller-manager-7c9ccb858b-blnmz\" (UID: \"8de4ba02-a54d-4ffb-a781-267c6a741abe\") " pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.112218 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txm7p\" (UniqueName: \"kubernetes.io/projected/8de4ba02-a54d-4ffb-a781-267c6a741abe-kube-api-access-txm7p\") pod \"route-controller-manager-7c9ccb858b-blnmz\" (UID: \"8de4ba02-a54d-4ffb-a781-267c6a741abe\") " pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.212943 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de4ba02-a54d-4ffb-a781-267c6a741abe-config\") pod \"route-controller-manager-7c9ccb858b-blnmz\" (UID: \"8de4ba02-a54d-4ffb-a781-267c6a741abe\") " pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.212993 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8de4ba02-a54d-4ffb-a781-267c6a741abe-client-ca\") pod 
\"route-controller-manager-7c9ccb858b-blnmz\" (UID: \"8de4ba02-a54d-4ffb-a781-267c6a741abe\") " pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.213018 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8de4ba02-a54d-4ffb-a781-267c6a741abe-serving-cert\") pod \"route-controller-manager-7c9ccb858b-blnmz\" (UID: \"8de4ba02-a54d-4ffb-a781-267c6a741abe\") " pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.213039 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txm7p\" (UniqueName: \"kubernetes.io/projected/8de4ba02-a54d-4ffb-a781-267c6a741abe-kube-api-access-txm7p\") pod \"route-controller-manager-7c9ccb858b-blnmz\" (UID: \"8de4ba02-a54d-4ffb-a781-267c6a741abe\") " pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.214411 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de4ba02-a54d-4ffb-a781-267c6a741abe-config\") pod \"route-controller-manager-7c9ccb858b-blnmz\" (UID: \"8de4ba02-a54d-4ffb-a781-267c6a741abe\") " pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.214952 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8de4ba02-a54d-4ffb-a781-267c6a741abe-client-ca\") pod \"route-controller-manager-7c9ccb858b-blnmz\" (UID: \"8de4ba02-a54d-4ffb-a781-267c6a741abe\") " pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.227774 4762 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8de4ba02-a54d-4ffb-a781-267c6a741abe-serving-cert\") pod \"route-controller-manager-7c9ccb858b-blnmz\" (UID: \"8de4ba02-a54d-4ffb-a781-267c6a741abe\") " pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.228968 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txm7p\" (UniqueName: \"kubernetes.io/projected/8de4ba02-a54d-4ffb-a781-267c6a741abe-kube-api-access-txm7p\") pod \"route-controller-manager-7c9ccb858b-blnmz\" (UID: \"8de4ba02-a54d-4ffb-a781-267c6a741abe\") " pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.369204 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:25:41 crc kubenswrapper[4762]: I0308 00:25:41.839429 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz"] Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.389993 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g7kzd" event={"ID":"2b14b4da-20cb-4559-9d3b-007f0f76ae72","Type":"ContainerStarted","Data":"751505370b02ff8f193bce741bb9e1f87c585ffd338b17908edf06f91b91cef2"} Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.392619 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" event={"ID":"8de4ba02-a54d-4ffb-a781-267c6a741abe","Type":"ContainerStarted","Data":"7235c25da98b7ff58a4ed662356d46d92208fd90aac12703cffdc723bc945747"} Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.392671 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" event={"ID":"8de4ba02-a54d-4ffb-a781-267c6a741abe","Type":"ContainerStarted","Data":"20ca4f75465887dab3c562649e862663de78f35cf4ca4306d83b2c2a03fd9b19"} Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.392821 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" podUID="53485f74-9ce0-4382-8219-5e9a42956618" containerName="controller-manager" containerID="cri-o://704e1a5066431b4a6878c791d39a641b6aab61dab7d7427b03d00ead0a8a23c5" gracePeriod=30 Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.418645 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g7kzd" podStartSLOduration=4.003970595 podStartE2EDuration="34.418622083s" podCreationTimestamp="2026-03-08 00:25:08 +0000 UTC" firstStartedPulling="2026-03-08 00:25:10.759858762 +0000 UTC m=+132.234003096" lastFinishedPulling="2026-03-08 00:25:41.17451024 +0000 UTC m=+162.648654584" observedRunningTime="2026-03-08 00:25:42.414386992 +0000 UTC m=+163.888531376" watchObservedRunningTime="2026-03-08 00:25:42.418622083 +0000 UTC m=+163.892766427" Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.439367 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" podStartSLOduration=1.439331026 podStartE2EDuration="1.439331026s" podCreationTimestamp="2026-03-08 00:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:42.430939705 +0000 UTC m=+163.905084059" watchObservedRunningTime="2026-03-08 00:25:42.439331026 +0000 UTC m=+163.913475400" Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.616545 4762 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.617189 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.619950 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.623071 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.631345 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.636401 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.636472 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.738437 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 08 
00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.738538 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.738679 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.763367 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 08 00:25:42 crc kubenswrapper[4762]: I0308 00:25:42.991980 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.404275 4762 generic.go:334] "Generic (PLEG): container finished" podID="53485f74-9ce0-4382-8219-5e9a42956618" containerID="704e1a5066431b4a6878c791d39a641b6aab61dab7d7427b03d00ead0a8a23c5" exitCode=0 Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.404373 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" event={"ID":"53485f74-9ce0-4382-8219-5e9a42956618","Type":"ContainerDied","Data":"704e1a5066431b4a6878c791d39a641b6aab61dab7d7427b03d00ead0a8a23c5"} Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.404791 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.415174 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.417279 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.503171 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.539017 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d4bb595c4-x62ll"] Mar 08 00:25:43 crc kubenswrapper[4762]: E0308 00:25:43.539399 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53485f74-9ce0-4382-8219-5e9a42956618" containerName="controller-manager" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.539428 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="53485f74-9ce0-4382-8219-5e9a42956618" containerName="controller-manager" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.539563 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="53485f74-9ce0-4382-8219-5e9a42956618" containerName="controller-manager" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.540331 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.586934 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d4bb595c4-x62ll"] Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.652917 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "53485f74-9ce0-4382-8219-5e9a42956618" (UID: "53485f74-9ce0-4382-8219-5e9a42956618"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.652998 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-proxy-ca-bundles\") pod \"53485f74-9ce0-4382-8219-5e9a42956618\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.653141 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53485f74-9ce0-4382-8219-5e9a42956618-serving-cert\") pod \"53485f74-9ce0-4382-8219-5e9a42956618\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.653236 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-client-ca\") pod \"53485f74-9ce0-4382-8219-5e9a42956618\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.654039 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-client-ca" (OuterVolumeSpecName: "client-ca") pod "53485f74-9ce0-4382-8219-5e9a42956618" (UID: "53485f74-9ce0-4382-8219-5e9a42956618"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.654202 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-config\") pod \"53485f74-9ce0-4382-8219-5e9a42956618\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.654833 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-config" (OuterVolumeSpecName: "config") pod "53485f74-9ce0-4382-8219-5e9a42956618" (UID: "53485f74-9ce0-4382-8219-5e9a42956618"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.654911 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72lx8\" (UniqueName: \"kubernetes.io/projected/53485f74-9ce0-4382-8219-5e9a42956618-kube-api-access-72lx8\") pod \"53485f74-9ce0-4382-8219-5e9a42956618\" (UID: \"53485f74-9ce0-4382-8219-5e9a42956618\") " Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.655042 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-client-ca\") pod \"controller-manager-d4bb595c4-x62ll\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.655084 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-config\") pod \"controller-manager-d4bb595c4-x62ll\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " 
pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.655107 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x829m\" (UniqueName: \"kubernetes.io/projected/83fd1189-6cde-451a-8c08-18acd7921342-kube-api-access-x829m\") pod \"controller-manager-d4bb595c4-x62ll\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.655134 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-proxy-ca-bundles\") pod \"controller-manager-d4bb595c4-x62ll\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.655181 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83fd1189-6cde-451a-8c08-18acd7921342-serving-cert\") pod \"controller-manager-d4bb595c4-x62ll\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.655279 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.655299 4762 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.655314 4762 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53485f74-9ce0-4382-8219-5e9a42956618-client-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.660921 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53485f74-9ce0-4382-8219-5e9a42956618-kube-api-access-72lx8" (OuterVolumeSpecName: "kube-api-access-72lx8") pod "53485f74-9ce0-4382-8219-5e9a42956618" (UID: "53485f74-9ce0-4382-8219-5e9a42956618"). InnerVolumeSpecName "kube-api-access-72lx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.663154 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53485f74-9ce0-4382-8219-5e9a42956618-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "53485f74-9ce0-4382-8219-5e9a42956618" (UID: "53485f74-9ce0-4382-8219-5e9a42956618"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.756508 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83fd1189-6cde-451a-8c08-18acd7921342-serving-cert\") pod \"controller-manager-d4bb595c4-x62ll\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.756606 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-client-ca\") pod \"controller-manager-d4bb595c4-x62ll\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.756630 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-config\") pod \"controller-manager-d4bb595c4-x62ll\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.756646 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-proxy-ca-bundles\") pod \"controller-manager-d4bb595c4-x62ll\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.756666 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x829m\" (UniqueName: \"kubernetes.io/projected/83fd1189-6cde-451a-8c08-18acd7921342-kube-api-access-x829m\") pod 
\"controller-manager-d4bb595c4-x62ll\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.756725 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53485f74-9ce0-4382-8219-5e9a42956618-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.756736 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72lx8\" (UniqueName: \"kubernetes.io/projected/53485f74-9ce0-4382-8219-5e9a42956618-kube-api-access-72lx8\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.757730 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-client-ca\") pod \"controller-manager-d4bb595c4-x62ll\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.757986 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-proxy-ca-bundles\") pod \"controller-manager-d4bb595c4-x62ll\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.758240 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-config\") pod \"controller-manager-d4bb595c4-x62ll\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.760909 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83fd1189-6cde-451a-8c08-18acd7921342-serving-cert\") pod \"controller-manager-d4bb595c4-x62ll\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.774664 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x829m\" (UniqueName: \"kubernetes.io/projected/83fd1189-6cde-451a-8c08-18acd7921342-kube-api-access-x829m\") pod \"controller-manager-d4bb595c4-x62ll\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:43 crc kubenswrapper[4762]: I0308 00:25:43.866824 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:44 crc kubenswrapper[4762]: I0308 00:25:44.144438 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d4bb595c4-x62ll"] Mar 08 00:25:44 crc kubenswrapper[4762]: I0308 00:25:44.414048 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" event={"ID":"53485f74-9ce0-4382-8219-5e9a42956618","Type":"ContainerDied","Data":"c8b5b4c5d8a352e4b61164202d87be79c97efdf836aaea696095f8dbacf144cb"} Mar 08 00:25:44 crc kubenswrapper[4762]: I0308 00:25:44.414653 4762 scope.go:117] "RemoveContainer" containerID="704e1a5066431b4a6878c791d39a641b6aab61dab7d7427b03d00ead0a8a23c5" Mar 08 00:25:44 crc kubenswrapper[4762]: I0308 00:25:44.414199 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm" Mar 08 00:25:44 crc kubenswrapper[4762]: I0308 00:25:44.416388 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90","Type":"ContainerStarted","Data":"b15034959db559f36460e1a9412e6ad5e7bda514fbd405978c5f4bb7a3bd4bb9"} Mar 08 00:25:44 crc kubenswrapper[4762]: I0308 00:25:44.416438 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90","Type":"ContainerStarted","Data":"889d571868eaf2c1c535db45dcf6084dcff2d9728ce0ec84c7c6162ca53e5e07"} Mar 08 00:25:44 crc kubenswrapper[4762]: I0308 00:25:44.421547 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" event={"ID":"83fd1189-6cde-451a-8c08-18acd7921342","Type":"ContainerStarted","Data":"2a3f947cc43bf1ddd4ed4d5b64a50b4ec84d66babc94d50c267cb6eab4a02eb7"} Mar 08 00:25:44 crc kubenswrapper[4762]: I0308 00:25:44.421794 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" event={"ID":"83fd1189-6cde-451a-8c08-18acd7921342","Type":"ContainerStarted","Data":"f484163fb5aab00925f58be30c57da8435fc767c27af0670de83661ea1429071"} Mar 08 00:25:44 crc kubenswrapper[4762]: I0308 00:25:44.421955 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:44 crc kubenswrapper[4762]: I0308 00:25:44.427257 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:25:44 crc kubenswrapper[4762]: I0308 00:25:44.449211 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.449189011 podStartE2EDuration="2.449189011s" podCreationTimestamp="2026-03-08 00:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:44.444137538 +0000 UTC m=+165.918281882" watchObservedRunningTime="2026-03-08 00:25:44.449189011 +0000 UTC m=+165.923333355" Mar 08 00:25:44 crc kubenswrapper[4762]: I0308 00:25:44.478885 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" podStartSLOduration=4.478864015 podStartE2EDuration="4.478864015s" podCreationTimestamp="2026-03-08 00:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:44.474978737 +0000 UTC m=+165.949123081" watchObservedRunningTime="2026-03-08 00:25:44.478864015 +0000 UTC m=+165.953008359" Mar 08 00:25:44 crc kubenswrapper[4762]: I0308 00:25:44.489569 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm"] Mar 08 00:25:44 crc kubenswrapper[4762]: I0308 00:25:44.492211 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bfd9f6b94-rbsmm"] Mar 08 00:25:45 crc kubenswrapper[4762]: I0308 00:25:45.269603 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53485f74-9ce0-4382-8219-5e9a42956618" path="/var/lib/kubelet/pods/53485f74-9ce0-4382-8219-5e9a42956618/volumes" Mar 08 00:25:45 crc kubenswrapper[4762]: I0308 00:25:45.429496 4762 generic.go:334] "Generic (PLEG): container finished" podID="e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90" containerID="b15034959db559f36460e1a9412e6ad5e7bda514fbd405978c5f4bb7a3bd4bb9" exitCode=0 Mar 08 00:25:45 crc kubenswrapper[4762]: I0308 00:25:45.429607 4762 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90","Type":"ContainerDied","Data":"b15034959db559f36460e1a9412e6ad5e7bda514fbd405978c5f4bb7a3bd4bb9"} Mar 08 00:25:46 crc kubenswrapper[4762]: I0308 00:25:46.722168 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 08 00:25:46 crc kubenswrapper[4762]: I0308 00:25:46.805089 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90-kubelet-dir\") pod \"e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90\" (UID: \"e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90\") " Mar 08 00:25:46 crc kubenswrapper[4762]: I0308 00:25:46.805163 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90-kube-api-access\") pod \"e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90\" (UID: \"e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90\") " Mar 08 00:25:46 crc kubenswrapper[4762]: I0308 00:25:46.805316 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90" (UID: "e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:25:46 crc kubenswrapper[4762]: I0308 00:25:46.812515 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90" (UID: "e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:25:46 crc kubenswrapper[4762]: I0308 00:25:46.906318 4762 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:46 crc kubenswrapper[4762]: I0308 00:25:46.906371 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:47 crc kubenswrapper[4762]: I0308 00:25:47.446510 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90","Type":"ContainerDied","Data":"889d571868eaf2c1c535db45dcf6084dcff2d9728ce0ec84c7c6162ca53e5e07"} Mar 08 00:25:47 crc kubenswrapper[4762]: I0308 00:25:47.447064 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="889d571868eaf2c1c535db45dcf6084dcff2d9728ce0ec84c7c6162ca53e5e07" Mar 08 00:25:47 crc kubenswrapper[4762]: I0308 00:25:47.446823 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.405925 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 08 00:25:48 crc kubenswrapper[4762]: E0308 00:25:48.406247 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90" containerName="pruner" Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.406262 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90" containerName="pruner" Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.406380 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e385c1f6-68e9-4f7e-ba53-cdb7f6dcdb90" containerName="pruner" Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.406886 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.412797 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.412855 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.421629 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.431203 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f89876a6-46ce-4acd-8078-c41d23a2330e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f89876a6-46ce-4acd-8078-c41d23a2330e\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.431251 4762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f89876a6-46ce-4acd-8078-c41d23a2330e-var-lock\") pod \"installer-9-crc\" (UID: \"f89876a6-46ce-4acd-8078-c41d23a2330e\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.431354 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f89876a6-46ce-4acd-8078-c41d23a2330e-kube-api-access\") pod \"installer-9-crc\" (UID: \"f89876a6-46ce-4acd-8078-c41d23a2330e\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.532119 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f89876a6-46ce-4acd-8078-c41d23a2330e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f89876a6-46ce-4acd-8078-c41d23a2330e\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.532175 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f89876a6-46ce-4acd-8078-c41d23a2330e-var-lock\") pod \"installer-9-crc\" (UID: \"f89876a6-46ce-4acd-8078-c41d23a2330e\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.532240 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f89876a6-46ce-4acd-8078-c41d23a2330e-kube-api-access\") pod \"installer-9-crc\" (UID: \"f89876a6-46ce-4acd-8078-c41d23a2330e\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.532263 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/f89876a6-46ce-4acd-8078-c41d23a2330e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f89876a6-46ce-4acd-8078-c41d23a2330e\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.532362 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f89876a6-46ce-4acd-8078-c41d23a2330e-var-lock\") pod \"installer-9-crc\" (UID: \"f89876a6-46ce-4acd-8078-c41d23a2330e\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.553422 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f89876a6-46ce-4acd-8078-c41d23a2330e-kube-api-access\") pod \"installer-9-crc\" (UID: \"f89876a6-46ce-4acd-8078-c41d23a2330e\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 08 00:25:48 crc kubenswrapper[4762]: I0308 00:25:48.733148 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 08 00:25:49 crc kubenswrapper[4762]: I0308 00:25:49.049211 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 08 00:25:49 crc kubenswrapper[4762]: I0308 00:25:49.155574 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:25:49 crc kubenswrapper[4762]: I0308 00:25:49.156286 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:25:49 crc kubenswrapper[4762]: I0308 00:25:49.464027 4762 generic.go:334] "Generic (PLEG): container finished" podID="1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" containerID="bc353fb9bef87ed40ef7218999b5f8250966642cb909d15a46905980b00c5ced" exitCode=0 Mar 08 00:25:49 crc kubenswrapper[4762]: I0308 00:25:49.464108 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8bhh" event={"ID":"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b","Type":"ContainerDied","Data":"bc353fb9bef87ed40ef7218999b5f8250966642cb909d15a46905980b00c5ced"} Mar 08 00:25:49 crc kubenswrapper[4762]: I0308 00:25:49.465973 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4c9k" event={"ID":"30000013-c882-4eaa-a7f0-fc380ef4f09c","Type":"ContainerStarted","Data":"38a1afdae8e8c7e0c4d72770d10fb16878506bb51fa550f35139f8f08ebf3670"} Mar 08 00:25:49 crc kubenswrapper[4762]: I0308 00:25:49.467886 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f89876a6-46ce-4acd-8078-c41d23a2330e","Type":"ContainerStarted","Data":"76cc49238a18de54dce8f4d8a706d403438e2a9b157bad1db5d3dd155d508cdb"} Mar 08 00:25:49 crc kubenswrapper[4762]: I0308 00:25:49.467929 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"f89876a6-46ce-4acd-8078-c41d23a2330e","Type":"ContainerStarted","Data":"8f5ee1ce0623b9ba46af3be82606909ccf42d4d06babe3693092a511dec4f574"} Mar 08 00:25:49 crc kubenswrapper[4762]: I0308 00:25:49.539374 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.539357504 podStartE2EDuration="1.539357504s" podCreationTimestamp="2026-03-08 00:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:25:49.534469567 +0000 UTC m=+171.008613901" watchObservedRunningTime="2026-03-08 00:25:49.539357504 +0000 UTC m=+171.013501848" Mar 08 00:25:50 crc kubenswrapper[4762]: I0308 00:25:50.289118 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g7kzd" podUID="2b14b4da-20cb-4559-9d3b-007f0f76ae72" containerName="registry-server" probeResult="failure" output=< Mar 08 00:25:50 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 00:25:50 crc kubenswrapper[4762]: > Mar 08 00:25:50 crc kubenswrapper[4762]: I0308 00:25:50.478559 4762 generic.go:334] "Generic (PLEG): container finished" podID="e304c866-47f5-4a38-b530-b816ec5e685a" containerID="999810a61f599323f439df6f7fec68be66e53c5406efb5b5600773de8c018f43" exitCode=0 Mar 08 00:25:50 crc kubenswrapper[4762]: I0308 00:25:50.478630 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7jhpd" event={"ID":"e304c866-47f5-4a38-b530-b816ec5e685a","Type":"ContainerDied","Data":"999810a61f599323f439df6f7fec68be66e53c5406efb5b5600773de8c018f43"} Mar 08 00:25:50 crc kubenswrapper[4762]: I0308 00:25:50.482245 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7tm2" 
event={"ID":"82abd8f0-adc8-4094-a833-073e1cc68f50","Type":"ContainerStarted","Data":"4d10276cebc95dd8209dce16c538620703bc40477e51d7a5354fcbc59b0368cb"} Mar 08 00:25:50 crc kubenswrapper[4762]: I0308 00:25:50.486872 4762 generic.go:334] "Generic (PLEG): container finished" podID="63ac2172-da6d-436b-8cde-593837d65920" containerID="f1976e605dbc7317ec94274f6e4489f4035f976bbf3f23c0440163e028025360" exitCode=0 Mar 08 00:25:50 crc kubenswrapper[4762]: I0308 00:25:50.486937 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc85x" event={"ID":"63ac2172-da6d-436b-8cde-593837d65920","Type":"ContainerDied","Data":"f1976e605dbc7317ec94274f6e4489f4035f976bbf3f23c0440163e028025360"} Mar 08 00:25:50 crc kubenswrapper[4762]: I0308 00:25:50.490268 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8bhh" event={"ID":"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b","Type":"ContainerStarted","Data":"30480bd761de474c85ccb5d3a9140f25f5375622ad707eda2f845d1fee55d32c"} Mar 08 00:25:50 crc kubenswrapper[4762]: I0308 00:25:50.492958 4762 generic.go:334] "Generic (PLEG): container finished" podID="30000013-c882-4eaa-a7f0-fc380ef4f09c" containerID="38a1afdae8e8c7e0c4d72770d10fb16878506bb51fa550f35139f8f08ebf3670" exitCode=0 Mar 08 00:25:50 crc kubenswrapper[4762]: I0308 00:25:50.493025 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4c9k" event={"ID":"30000013-c882-4eaa-a7f0-fc380ef4f09c","Type":"ContainerDied","Data":"38a1afdae8e8c7e0c4d72770d10fb16878506bb51fa550f35139f8f08ebf3670"} Mar 08 00:25:50 crc kubenswrapper[4762]: I0308 00:25:50.588566 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k8bhh" podStartSLOduration=3.296776741 podStartE2EDuration="43.588543632s" podCreationTimestamp="2026-03-08 00:25:07 +0000 UTC" firstStartedPulling="2026-03-08 00:25:09.637131328 +0000 UTC 
m=+131.111275672" lastFinishedPulling="2026-03-08 00:25:49.928898219 +0000 UTC m=+171.403042563" observedRunningTime="2026-03-08 00:25:50.584932414 +0000 UTC m=+172.059076748" watchObservedRunningTime="2026-03-08 00:25:50.588543632 +0000 UTC m=+172.062687976" Mar 08 00:25:51 crc kubenswrapper[4762]: I0308 00:25:51.502605 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4c9k" event={"ID":"30000013-c882-4eaa-a7f0-fc380ef4f09c","Type":"ContainerStarted","Data":"f5a6dacb6984b894bc6c80aab5ad9ea6a896efbadda90eb28afac6d737b48a9b"} Mar 08 00:25:51 crc kubenswrapper[4762]: I0308 00:25:51.505869 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7jhpd" event={"ID":"e304c866-47f5-4a38-b530-b816ec5e685a","Type":"ContainerStarted","Data":"6889c6c6f7b4aaa7b6e429059ec816f6af28f39469e0c2c0066c03722877e452"} Mar 08 00:25:51 crc kubenswrapper[4762]: I0308 00:25:51.507696 4762 generic.go:334] "Generic (PLEG): container finished" podID="82abd8f0-adc8-4094-a833-073e1cc68f50" containerID="4d10276cebc95dd8209dce16c538620703bc40477e51d7a5354fcbc59b0368cb" exitCode=0 Mar 08 00:25:51 crc kubenswrapper[4762]: I0308 00:25:51.507806 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7tm2" event={"ID":"82abd8f0-adc8-4094-a833-073e1cc68f50","Type":"ContainerDied","Data":"4d10276cebc95dd8209dce16c538620703bc40477e51d7a5354fcbc59b0368cb"} Mar 08 00:25:51 crc kubenswrapper[4762]: I0308 00:25:51.509851 4762 generic.go:334] "Generic (PLEG): container finished" podID="210aa3ef-23bb-4e7b-9ff5-39cec85310ba" containerID="9999d9678b1098311895679b6af20c93e1c792731398f2e13087265dc2ddb632" exitCode=0 Mar 08 00:25:51 crc kubenswrapper[4762]: I0308 00:25:51.509911 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcsdg" 
event={"ID":"210aa3ef-23bb-4e7b-9ff5-39cec85310ba","Type":"ContainerDied","Data":"9999d9678b1098311895679b6af20c93e1c792731398f2e13087265dc2ddb632"} Mar 08 00:25:51 crc kubenswrapper[4762]: I0308 00:25:51.512546 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc85x" event={"ID":"63ac2172-da6d-436b-8cde-593837d65920","Type":"ContainerStarted","Data":"0f1d913f26c675392b272c522cd3e99bc41e7fd5f326449d03c114311f14a475"} Mar 08 00:25:51 crc kubenswrapper[4762]: I0308 00:25:51.562690 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l4c9k" podStartSLOduration=2.327649149 podStartE2EDuration="43.562663332s" podCreationTimestamp="2026-03-08 00:25:08 +0000 UTC" firstStartedPulling="2026-03-08 00:25:09.642414006 +0000 UTC m=+131.116558350" lastFinishedPulling="2026-03-08 00:25:50.877428189 +0000 UTC m=+172.351572533" observedRunningTime="2026-03-08 00:25:51.538125394 +0000 UTC m=+173.012269738" watchObservedRunningTime="2026-03-08 00:25:51.562663332 +0000 UTC m=+173.036807676" Mar 08 00:25:51 crc kubenswrapper[4762]: I0308 00:25:51.594703 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wc85x" podStartSLOduration=4.317024263 podStartE2EDuration="45.594683627s" podCreationTimestamp="2026-03-08 00:25:06 +0000 UTC" firstStartedPulling="2026-03-08 00:25:09.633067269 +0000 UTC m=+131.107211613" lastFinishedPulling="2026-03-08 00:25:50.910726633 +0000 UTC m=+172.384870977" observedRunningTime="2026-03-08 00:25:51.564411449 +0000 UTC m=+173.038555793" watchObservedRunningTime="2026-03-08 00:25:51.594683627 +0000 UTC m=+173.068827981" Mar 08 00:25:51 crc kubenswrapper[4762]: I0308 00:25:51.635731 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7jhpd" podStartSLOduration=2.235866037 podStartE2EDuration="46.635706785s" 
podCreationTimestamp="2026-03-08 00:25:05 +0000 UTC" firstStartedPulling="2026-03-08 00:25:06.53666284 +0000 UTC m=+128.010807184" lastFinishedPulling="2026-03-08 00:25:50.936503588 +0000 UTC m=+172.410647932" observedRunningTime="2026-03-08 00:25:51.63294915 +0000 UTC m=+173.107093504" watchObservedRunningTime="2026-03-08 00:25:51.635706785 +0000 UTC m=+173.109851129" Mar 08 00:25:52 crc kubenswrapper[4762]: I0308 00:25:52.519850 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7tm2" event={"ID":"82abd8f0-adc8-4094-a833-073e1cc68f50","Type":"ContainerStarted","Data":"0fe63cc4c723a89aa43b98bbd972921b83a06c215c305bb0eecf60ea51d58937"} Mar 08 00:25:52 crc kubenswrapper[4762]: I0308 00:25:52.523075 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcsdg" event={"ID":"210aa3ef-23bb-4e7b-9ff5-39cec85310ba","Type":"ContainerStarted","Data":"ea6e393a9fbec1214b9250b27dae9441d4f4912ecdafbb23780dba80212cbfdd"} Mar 08 00:25:52 crc kubenswrapper[4762]: I0308 00:25:52.555063 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x7tm2" podStartSLOduration=2.200227908 podStartE2EDuration="47.55504796s" podCreationTimestamp="2026-03-08 00:25:05 +0000 UTC" firstStartedPulling="2026-03-08 00:25:06.557516061 +0000 UTC m=+128.031660405" lastFinishedPulling="2026-03-08 00:25:51.912336113 +0000 UTC m=+173.386480457" observedRunningTime="2026-03-08 00:25:52.552444381 +0000 UTC m=+174.026588725" watchObservedRunningTime="2026-03-08 00:25:52.55504796 +0000 UTC m=+174.029192304" Mar 08 00:25:52 crc kubenswrapper[4762]: I0308 00:25:52.570375 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dcsdg" podStartSLOduration=3.090471365 podStartE2EDuration="48.570355505s" podCreationTimestamp="2026-03-08 00:25:04 +0000 UTC" firstStartedPulling="2026-03-08 
00:25:06.517075529 +0000 UTC m=+127.991219873" lastFinishedPulling="2026-03-08 00:25:51.996959669 +0000 UTC m=+173.471104013" observedRunningTime="2026-03-08 00:25:52.566950705 +0000 UTC m=+174.041095049" watchObservedRunningTime="2026-03-08 00:25:52.570355505 +0000 UTC m=+174.044499849" Mar 08 00:25:54 crc kubenswrapper[4762]: I0308 00:25:54.541533 4762 generic.go:334] "Generic (PLEG): container finished" podID="1b1f4525-a957-4708-b166-0b16f67cb20a" containerID="7c5a9cfc9a7f236f403a919958504d229b649c5d289508086b6ba2bd5065a235" exitCode=0 Mar 08 00:25:54 crc kubenswrapper[4762]: I0308 00:25:54.541624 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qv7hs" event={"ID":"1b1f4525-a957-4708-b166-0b16f67cb20a","Type":"ContainerDied","Data":"7c5a9cfc9a7f236f403a919958504d229b649c5d289508086b6ba2bd5065a235"} Mar 08 00:25:55 crc kubenswrapper[4762]: I0308 00:25:55.185631 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:25:55 crc kubenswrapper[4762]: I0308 00:25:55.185730 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:25:55 crc kubenswrapper[4762]: I0308 00:25:55.318393 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:25:55 crc kubenswrapper[4762]: I0308 00:25:55.527327 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:25:55 crc kubenswrapper[4762]: I0308 00:25:55.527425 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:25:55 crc kubenswrapper[4762]: I0308 00:25:55.550982 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qv7hs" 
event={"ID":"1b1f4525-a957-4708-b166-0b16f67cb20a","Type":"ContainerStarted","Data":"b294f9a5fbd7ca36fedf6d08a9d63ed6ddcdb20c565c393d163ef8758a5e0a14"} Mar 08 00:25:55 crc kubenswrapper[4762]: I0308 00:25:55.586606 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qv7hs" podStartSLOduration=2.193022485 podStartE2EDuration="50.586553032s" podCreationTimestamp="2026-03-08 00:25:05 +0000 UTC" firstStartedPulling="2026-03-08 00:25:06.541826033 +0000 UTC m=+128.015970377" lastFinishedPulling="2026-03-08 00:25:54.93535657 +0000 UTC m=+176.409500924" observedRunningTime="2026-03-08 00:25:55.584914709 +0000 UTC m=+177.059059063" watchObservedRunningTime="2026-03-08 00:25:55.586553032 +0000 UTC m=+177.060697386" Mar 08 00:25:55 crc kubenswrapper[4762]: I0308 00:25:55.604687 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:25:55 crc kubenswrapper[4762]: I0308 00:25:55.743594 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:25:55 crc kubenswrapper[4762]: I0308 00:25:55.743659 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:25:55 crc kubenswrapper[4762]: I0308 00:25:55.967441 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:25:55 crc kubenswrapper[4762]: I0308 00:25:55.967558 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:25:56 crc kubenswrapper[4762]: I0308 00:25:56.026385 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:25:56 crc kubenswrapper[4762]: I0308 00:25:56.626465 4762 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:25:56 crc kubenswrapper[4762]: I0308 00:25:56.805402 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-qv7hs" podUID="1b1f4525-a957-4708-b166-0b16f67cb20a" containerName="registry-server" probeResult="failure" output=< Mar 08 00:25:56 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 00:25:56 crc kubenswrapper[4762]: > Mar 08 00:25:57 crc kubenswrapper[4762]: I0308 00:25:57.320506 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:25:57 crc kubenswrapper[4762]: I0308 00:25:57.321062 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:25:57 crc kubenswrapper[4762]: I0308 00:25:57.390261 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:25:57 crc kubenswrapper[4762]: I0308 00:25:57.618738 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:25:57 crc kubenswrapper[4762]: I0308 00:25:57.741644 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:25:57 crc kubenswrapper[4762]: I0308 00:25:57.741739 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:25:57 crc kubenswrapper[4762]: I0308 00:25:57.789973 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.027318 4762 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" podUID="cf5ac2df-231b-4019-a6ad-a9485ee8802e" containerName="oauth-openshift" containerID="cri-o://b67bb9155fbb001682129da5a6f1ceed81a1b9830563a4766887c717d2c39532" gracePeriod=15 Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.578710 4762 generic.go:334] "Generic (PLEG): container finished" podID="cf5ac2df-231b-4019-a6ad-a9485ee8802e" containerID="b67bb9155fbb001682129da5a6f1ceed81a1b9830563a4766887c717d2c39532" exitCode=0 Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.578884 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" event={"ID":"cf5ac2df-231b-4019-a6ad-a9485ee8802e","Type":"ContainerDied","Data":"b67bb9155fbb001682129da5a6f1ceed81a1b9830563a4766887c717d2c39532"} Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.578916 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" event={"ID":"cf5ac2df-231b-4019-a6ad-a9485ee8802e","Type":"ContainerDied","Data":"22208a5ce329fbcf2e1ee1e7f07aa51a1194d1330087ab7367e8bffccf94f9fe"} Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.579024 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22208a5ce329fbcf2e1ee1e7f07aa51a1194d1330087ab7367e8bffccf94f9fe" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.610630 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.635686 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.713465 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-trusted-ca-bundle\") pod \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.713582 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-service-ca\") pod \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.713659 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-serving-cert\") pod \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.714734 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "cf5ac2df-231b-4019-a6ad-a9485ee8802e" (UID: "cf5ac2df-231b-4019-a6ad-a9485ee8802e"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.713706 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-login\") pod \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.714846 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "cf5ac2df-231b-4019-a6ad-a9485ee8802e" (UID: "cf5ac2df-231b-4019-a6ad-a9485ee8802e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.714991 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-error\") pod \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.715070 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-provider-selection\") pod \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.715113 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-idp-0-file-data\") pod \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.715150 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf5ac2df-231b-4019-a6ad-a9485ee8802e-audit-dir\") pod \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.715218 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-session\") pod \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.715260 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-router-certs\") pod \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.715295 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-ocp-branding-template\") pod \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.715330 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cf5ac2df-231b-4019-a6ad-a9485ee8802e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "cf5ac2df-231b-4019-a6ad-a9485ee8802e" (UID: 
"cf5ac2df-231b-4019-a6ad-a9485ee8802e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.715354 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-audit-policies\") pod \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.715477 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-cliconfig\") pod \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.715513 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xm4c\" (UniqueName: \"kubernetes.io/projected/cf5ac2df-231b-4019-a6ad-a9485ee8802e-kube-api-access-7xm4c\") pod \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\" (UID: \"cf5ac2df-231b-4019-a6ad-a9485ee8802e\") " Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.715930 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "cf5ac2df-231b-4019-a6ad-a9485ee8802e" (UID: "cf5ac2df-231b-4019-a6ad-a9485ee8802e"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.716219 4762 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf5ac2df-231b-4019-a6ad-a9485ee8802e-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.716237 4762 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.716251 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.716264 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.716649 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "cf5ac2df-231b-4019-a6ad-a9485ee8802e" (UID: "cf5ac2df-231b-4019-a6ad-a9485ee8802e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.723048 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf5ac2df-231b-4019-a6ad-a9485ee8802e-kube-api-access-7xm4c" (OuterVolumeSpecName: "kube-api-access-7xm4c") pod "cf5ac2df-231b-4019-a6ad-a9485ee8802e" (UID: "cf5ac2df-231b-4019-a6ad-a9485ee8802e"). InnerVolumeSpecName "kube-api-access-7xm4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.723443 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "cf5ac2df-231b-4019-a6ad-a9485ee8802e" (UID: "cf5ac2df-231b-4019-a6ad-a9485ee8802e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.732913 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "cf5ac2df-231b-4019-a6ad-a9485ee8802e" (UID: "cf5ac2df-231b-4019-a6ad-a9485ee8802e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.733553 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.733578 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "cf5ac2df-231b-4019-a6ad-a9485ee8802e" (UID: "cf5ac2df-231b-4019-a6ad-a9485ee8802e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.733788 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.733731 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "cf5ac2df-231b-4019-a6ad-a9485ee8802e" (UID: "cf5ac2df-231b-4019-a6ad-a9485ee8802e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.736078 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "cf5ac2df-231b-4019-a6ad-a9485ee8802e" (UID: "cf5ac2df-231b-4019-a6ad-a9485ee8802e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.736722 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "cf5ac2df-231b-4019-a6ad-a9485ee8802e" (UID: "cf5ac2df-231b-4019-a6ad-a9485ee8802e"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.745216 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "cf5ac2df-231b-4019-a6ad-a9485ee8802e" (UID: "cf5ac2df-231b-4019-a6ad-a9485ee8802e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.745695 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "cf5ac2df-231b-4019-a6ad-a9485ee8802e" (UID: "cf5ac2df-231b-4019-a6ad-a9485ee8802e"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.799601 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.821676 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.822210 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xm4c\" (UniqueName: \"kubernetes.io/projected/cf5ac2df-231b-4019-a6ad-a9485ee8802e-kube-api-access-7xm4c\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.822554 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.822917 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.823149 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.823736 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.823987 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.824174 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.824361 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:58 crc kubenswrapper[4762]: I0308 00:25:58.824566 4762 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cf5ac2df-231b-4019-a6ad-a9485ee8802e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 08 00:25:59 crc kubenswrapper[4762]: I0308 00:25:59.232973 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:25:59 crc kubenswrapper[4762]: I0308 00:25:59.310937 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:25:59 crc kubenswrapper[4762]: I0308 00:25:59.517697 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7jhpd"] Mar 08 00:25:59 crc kubenswrapper[4762]: I0308 00:25:59.519502 4762 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-7jhpd" podUID="e304c866-47f5-4a38-b530-b816ec5e685a" containerName="registry-server" containerID="cri-o://6889c6c6f7b4aaa7b6e429059ec816f6af28f39469e0c2c0066c03722877e452" gracePeriod=2 Mar 08 00:25:59 crc kubenswrapper[4762]: I0308 00:25:59.587134 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5pmsg" Mar 08 00:25:59 crc kubenswrapper[4762]: I0308 00:25:59.618919 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5pmsg"] Mar 08 00:25:59 crc kubenswrapper[4762]: I0308 00:25:59.622285 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5pmsg"] Mar 08 00:25:59 crc kubenswrapper[4762]: I0308 00:25:59.648284 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.034205 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.139156 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548826-5sjq2"] Mar 08 00:26:00 crc kubenswrapper[4762]: E0308 00:26:00.139459 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e304c866-47f5-4a38-b530-b816ec5e685a" containerName="extract-content" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.139475 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e304c866-47f5-4a38-b530-b816ec5e685a" containerName="extract-content" Mar 08 00:26:00 crc kubenswrapper[4762]: E0308 00:26:00.139495 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e304c866-47f5-4a38-b530-b816ec5e685a" containerName="extract-utilities" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.139503 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e304c866-47f5-4a38-b530-b816ec5e685a" containerName="extract-utilities" Mar 08 00:26:00 crc kubenswrapper[4762]: E0308 00:26:00.139520 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf5ac2df-231b-4019-a6ad-a9485ee8802e" containerName="oauth-openshift" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.139530 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf5ac2df-231b-4019-a6ad-a9485ee8802e" containerName="oauth-openshift" Mar 08 00:26:00 crc kubenswrapper[4762]: E0308 00:26:00.139543 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e304c866-47f5-4a38-b530-b816ec5e685a" containerName="registry-server" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.139551 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e304c866-47f5-4a38-b530-b816ec5e685a" containerName="registry-server" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.139669 4762 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e304c866-47f5-4a38-b530-b816ec5e685a" containerName="registry-server" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.139690 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf5ac2df-231b-4019-a6ad-a9485ee8802e" containerName="oauth-openshift" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.140155 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548826-5sjq2" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.142465 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.144368 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.144981 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.145629 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548826-5sjq2"] Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.146480 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e304c866-47f5-4a38-b530-b816ec5e685a-catalog-content\") pod \"e304c866-47f5-4a38-b530-b816ec5e685a\" (UID: \"e304c866-47f5-4a38-b530-b816ec5e685a\") " Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.146804 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gkqv\" (UniqueName: \"kubernetes.io/projected/e304c866-47f5-4a38-b530-b816ec5e685a-kube-api-access-5gkqv\") pod \"e304c866-47f5-4a38-b530-b816ec5e685a\" (UID: \"e304c866-47f5-4a38-b530-b816ec5e685a\") " Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.146849 4762 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e304c866-47f5-4a38-b530-b816ec5e685a-utilities\") pod \"e304c866-47f5-4a38-b530-b816ec5e685a\" (UID: \"e304c866-47f5-4a38-b530-b816ec5e685a\") " Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.148339 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e304c866-47f5-4a38-b530-b816ec5e685a-utilities" (OuterVolumeSpecName: "utilities") pod "e304c866-47f5-4a38-b530-b816ec5e685a" (UID: "e304c866-47f5-4a38-b530-b816ec5e685a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.197220 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e304c866-47f5-4a38-b530-b816ec5e685a-kube-api-access-5gkqv" (OuterVolumeSpecName: "kube-api-access-5gkqv") pod "e304c866-47f5-4a38-b530-b816ec5e685a" (UID: "e304c866-47f5-4a38-b530-b816ec5e685a"). InnerVolumeSpecName "kube-api-access-5gkqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.243018 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e304c866-47f5-4a38-b530-b816ec5e685a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e304c866-47f5-4a38-b530-b816ec5e685a" (UID: "e304c866-47f5-4a38-b530-b816ec5e685a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.248397 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxmdr\" (UniqueName: \"kubernetes.io/projected/93accc2a-5975-4e5f-8927-264224130aca-kube-api-access-lxmdr\") pod \"auto-csr-approver-29548826-5sjq2\" (UID: \"93accc2a-5975-4e5f-8927-264224130aca\") " pod="openshift-infra/auto-csr-approver-29548826-5sjq2" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.248480 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gkqv\" (UniqueName: \"kubernetes.io/projected/e304c866-47f5-4a38-b530-b816ec5e685a-kube-api-access-5gkqv\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.248494 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e304c866-47f5-4a38-b530-b816ec5e685a-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.248511 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e304c866-47f5-4a38-b530-b816ec5e685a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.350704 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxmdr\" (UniqueName: \"kubernetes.io/projected/93accc2a-5975-4e5f-8927-264224130aca-kube-api-access-lxmdr\") pod \"auto-csr-approver-29548826-5sjq2\" (UID: \"93accc2a-5975-4e5f-8927-264224130aca\") " pod="openshift-infra/auto-csr-approver-29548826-5sjq2" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.383121 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxmdr\" (UniqueName: \"kubernetes.io/projected/93accc2a-5975-4e5f-8927-264224130aca-kube-api-access-lxmdr\") pod 
\"auto-csr-approver-29548826-5sjq2\" (UID: \"93accc2a-5975-4e5f-8927-264224130aca\") " pod="openshift-infra/auto-csr-approver-29548826-5sjq2" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.535282 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548826-5sjq2" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.601484 4762 generic.go:334] "Generic (PLEG): container finished" podID="e304c866-47f5-4a38-b530-b816ec5e685a" containerID="6889c6c6f7b4aaa7b6e429059ec816f6af28f39469e0c2c0066c03722877e452" exitCode=0 Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.601526 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7jhpd" event={"ID":"e304c866-47f5-4a38-b530-b816ec5e685a","Type":"ContainerDied","Data":"6889c6c6f7b4aaa7b6e429059ec816f6af28f39469e0c2c0066c03722877e452"} Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.601587 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7jhpd" event={"ID":"e304c866-47f5-4a38-b530-b816ec5e685a","Type":"ContainerDied","Data":"eb68046d04bc9bde7f77e47f4ed7107d6970de2c09efba3f4d6cd9ca274f7f93"} Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.601612 4762 scope.go:117] "RemoveContainer" containerID="6889c6c6f7b4aaa7b6e429059ec816f6af28f39469e0c2c0066c03722877e452" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.601668 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7jhpd" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.631091 4762 scope.go:117] "RemoveContainer" containerID="999810a61f599323f439df6f7fec68be66e53c5406efb5b5600773de8c018f43" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.679275 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7jhpd"] Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.684747 4762 scope.go:117] "RemoveContainer" containerID="f1741dec260d5b5dc8a460892a57c7de4a97f5e64ecbf362e6389d80a6f0d3c2" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.714862 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7jhpd"] Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.725976 4762 scope.go:117] "RemoveContainer" containerID="6889c6c6f7b4aaa7b6e429059ec816f6af28f39469e0c2c0066c03722877e452" Mar 08 00:26:00 crc kubenswrapper[4762]: E0308 00:26:00.729521 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6889c6c6f7b4aaa7b6e429059ec816f6af28f39469e0c2c0066c03722877e452\": container with ID starting with 6889c6c6f7b4aaa7b6e429059ec816f6af28f39469e0c2c0066c03722877e452 not found: ID does not exist" containerID="6889c6c6f7b4aaa7b6e429059ec816f6af28f39469e0c2c0066c03722877e452" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.729575 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6889c6c6f7b4aaa7b6e429059ec816f6af28f39469e0c2c0066c03722877e452"} err="failed to get container status \"6889c6c6f7b4aaa7b6e429059ec816f6af28f39469e0c2c0066c03722877e452\": rpc error: code = NotFound desc = could not find container \"6889c6c6f7b4aaa7b6e429059ec816f6af28f39469e0c2c0066c03722877e452\": container with ID starting with 6889c6c6f7b4aaa7b6e429059ec816f6af28f39469e0c2c0066c03722877e452 not 
found: ID does not exist" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.729615 4762 scope.go:117] "RemoveContainer" containerID="999810a61f599323f439df6f7fec68be66e53c5406efb5b5600773de8c018f43" Mar 08 00:26:00 crc kubenswrapper[4762]: E0308 00:26:00.738097 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"999810a61f599323f439df6f7fec68be66e53c5406efb5b5600773de8c018f43\": container with ID starting with 999810a61f599323f439df6f7fec68be66e53c5406efb5b5600773de8c018f43 not found: ID does not exist" containerID="999810a61f599323f439df6f7fec68be66e53c5406efb5b5600773de8c018f43" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.738148 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"999810a61f599323f439df6f7fec68be66e53c5406efb5b5600773de8c018f43"} err="failed to get container status \"999810a61f599323f439df6f7fec68be66e53c5406efb5b5600773de8c018f43\": rpc error: code = NotFound desc = could not find container \"999810a61f599323f439df6f7fec68be66e53c5406efb5b5600773de8c018f43\": container with ID starting with 999810a61f599323f439df6f7fec68be66e53c5406efb5b5600773de8c018f43 not found: ID does not exist" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.738185 4762 scope.go:117] "RemoveContainer" containerID="f1741dec260d5b5dc8a460892a57c7de4a97f5e64ecbf362e6389d80a6f0d3c2" Mar 08 00:26:00 crc kubenswrapper[4762]: E0308 00:26:00.739233 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1741dec260d5b5dc8a460892a57c7de4a97f5e64ecbf362e6389d80a6f0d3c2\": container with ID starting with f1741dec260d5b5dc8a460892a57c7de4a97f5e64ecbf362e6389d80a6f0d3c2 not found: ID does not exist" containerID="f1741dec260d5b5dc8a460892a57c7de4a97f5e64ecbf362e6389d80a6f0d3c2" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.739288 4762 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1741dec260d5b5dc8a460892a57c7de4a97f5e64ecbf362e6389d80a6f0d3c2"} err="failed to get container status \"f1741dec260d5b5dc8a460892a57c7de4a97f5e64ecbf362e6389d80a6f0d3c2\": rpc error: code = NotFound desc = could not find container \"f1741dec260d5b5dc8a460892a57c7de4a97f5e64ecbf362e6389d80a6f0d3c2\": container with ID starting with f1741dec260d5b5dc8a460892a57c7de4a97f5e64ecbf362e6389d80a6f0d3c2 not found: ID does not exist" Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.912715 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d4bb595c4-x62ll"] Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.913096 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" podUID="83fd1189-6cde-451a-8c08-18acd7921342" containerName="controller-manager" containerID="cri-o://2a3f947cc43bf1ddd4ed4d5b64a50b4ec84d66babc94d50c267cb6eab4a02eb7" gracePeriod=30 Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.936210 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz"] Mar 08 00:26:00 crc kubenswrapper[4762]: I0308 00:26:00.938258 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" podUID="8de4ba02-a54d-4ffb-a781-267c6a741abe" containerName="route-controller-manager" containerID="cri-o://7235c25da98b7ff58a4ed662356d46d92208fd90aac12703cffdc723bc945747" gracePeriod=30 Mar 08 00:26:01 crc kubenswrapper[4762]: I0308 00:26:01.012828 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548826-5sjq2"] Mar 08 00:26:01 crc kubenswrapper[4762]: W0308 00:26:01.020902 4762 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93accc2a_5975_4e5f_8927_264224130aca.slice/crio-47cbfb970c2b7ab17cb4461b5bc59b3029bb551507eeae3a28216a6a26026fe4 WatchSource:0}: Error finding container 47cbfb970c2b7ab17cb4461b5bc59b3029bb551507eeae3a28216a6a26026fe4: Status 404 returned error can't find the container with id 47cbfb970c2b7ab17cb4461b5bc59b3029bb551507eeae3a28216a6a26026fe4 Mar 08 00:26:01 crc kubenswrapper[4762]: I0308 00:26:01.270852 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf5ac2df-231b-4019-a6ad-a9485ee8802e" path="/var/lib/kubelet/pods/cf5ac2df-231b-4019-a6ad-a9485ee8802e/volumes" Mar 08 00:26:01 crc kubenswrapper[4762]: I0308 00:26:01.271487 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e304c866-47f5-4a38-b530-b816ec5e685a" path="/var/lib/kubelet/pods/e304c866-47f5-4a38-b530-b816ec5e685a/volumes" Mar 08 00:26:01 crc kubenswrapper[4762]: I0308 00:26:01.370540 4762 patch_prober.go:28] interesting pod/route-controller-manager-7c9ccb858b-blnmz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Mar 08 00:26:01 crc kubenswrapper[4762]: I0308 00:26:01.371205 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" podUID="8de4ba02-a54d-4ffb-a781-267c6a741abe" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": dial tcp 10.217.0.58:8443: connect: connection refused" Mar 08 00:26:01 crc kubenswrapper[4762]: I0308 00:26:01.615538 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548826-5sjq2" 
event={"ID":"93accc2a-5975-4e5f-8927-264224130aca","Type":"ContainerStarted","Data":"47cbfb970c2b7ab17cb4461b5bc59b3029bb551507eeae3a28216a6a26026fe4"} Mar 08 00:26:01 crc kubenswrapper[4762]: I0308 00:26:01.618552 4762 generic.go:334] "Generic (PLEG): container finished" podID="83fd1189-6cde-451a-8c08-18acd7921342" containerID="2a3f947cc43bf1ddd4ed4d5b64a50b4ec84d66babc94d50c267cb6eab4a02eb7" exitCode=0 Mar 08 00:26:01 crc kubenswrapper[4762]: I0308 00:26:01.618618 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" event={"ID":"83fd1189-6cde-451a-8c08-18acd7921342","Type":"ContainerDied","Data":"2a3f947cc43bf1ddd4ed4d5b64a50b4ec84d66babc94d50c267cb6eab4a02eb7"} Mar 08 00:26:01 crc kubenswrapper[4762]: I0308 00:26:01.621339 4762 generic.go:334] "Generic (PLEG): container finished" podID="8de4ba02-a54d-4ffb-a781-267c6a741abe" containerID="7235c25da98b7ff58a4ed662356d46d92208fd90aac12703cffdc723bc945747" exitCode=0 Mar 08 00:26:01 crc kubenswrapper[4762]: I0308 00:26:01.621389 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" event={"ID":"8de4ba02-a54d-4ffb-a781-267c6a741abe","Type":"ContainerDied","Data":"7235c25da98b7ff58a4ed662356d46d92208fd90aac12703cffdc723bc945747"} Mar 08 00:26:01 crc kubenswrapper[4762]: I0308 00:26:01.712336 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8bhh"] Mar 08 00:26:01 crc kubenswrapper[4762]: I0308 00:26:01.712705 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k8bhh" podUID="1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" containerName="registry-server" containerID="cri-o://30480bd761de474c85ccb5d3a9140f25f5375622ad707eda2f845d1fee55d32c" gracePeriod=2 Mar 08 00:26:01 crc kubenswrapper[4762]: I0308 00:26:01.922464 4762 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-g7kzd"] Mar 08 00:26:01 crc kubenswrapper[4762]: I0308 00:26:01.922770 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g7kzd" podUID="2b14b4da-20cb-4559-9d3b-007f0f76ae72" containerName="registry-server" containerID="cri-o://751505370b02ff8f193bce741bb9e1f87c585ffd338b17908edf06f91b91cef2" gracePeriod=2 Mar 08 00:26:01 crc kubenswrapper[4762]: I0308 00:26:01.928044 4762 ???:1] "http: TLS handshake error from 192.168.126.11:58590: no serving certificate available for the kubelet" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.159486 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.172233 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.203807 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p"] Mar 08 00:26:02 crc kubenswrapper[4762]: E0308 00:26:02.204160 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83fd1189-6cde-451a-8c08-18acd7921342" containerName="controller-manager" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.204184 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="83fd1189-6cde-451a-8c08-18acd7921342" containerName="controller-manager" Mar 08 00:26:02 crc kubenswrapper[4762]: E0308 00:26:02.204226 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8de4ba02-a54d-4ffb-a781-267c6a741abe" containerName="route-controller-manager" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.204243 4762 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8de4ba02-a54d-4ffb-a781-267c6a741abe" containerName="route-controller-manager" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.204425 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="83fd1189-6cde-451a-8c08-18acd7921342" containerName="controller-manager" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.204445 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="8de4ba02-a54d-4ffb-a781-267c6a741abe" containerName="route-controller-manager" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.205065 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.234264 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p"] Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.288490 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-proxy-ca-bundles\") pod \"83fd1189-6cde-451a-8c08-18acd7921342\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.289041 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de4ba02-a54d-4ffb-a781-267c6a741abe-config\") pod \"8de4ba02-a54d-4ffb-a781-267c6a741abe\" (UID: \"8de4ba02-a54d-4ffb-a781-267c6a741abe\") " Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.289081 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83fd1189-6cde-451a-8c08-18acd7921342-serving-cert\") pod \"83fd1189-6cde-451a-8c08-18acd7921342\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " Mar 08 00:26:02 crc 
kubenswrapper[4762]: I0308 00:26:02.289138 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8de4ba02-a54d-4ffb-a781-267c6a741abe-client-ca\") pod \"8de4ba02-a54d-4ffb-a781-267c6a741abe\" (UID: \"8de4ba02-a54d-4ffb-a781-267c6a741abe\") " Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.289189 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8de4ba02-a54d-4ffb-a781-267c6a741abe-serving-cert\") pod \"8de4ba02-a54d-4ffb-a781-267c6a741abe\" (UID: \"8de4ba02-a54d-4ffb-a781-267c6a741abe\") " Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.289219 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-config\") pod \"83fd1189-6cde-451a-8c08-18acd7921342\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.289254 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-client-ca\") pod \"83fd1189-6cde-451a-8c08-18acd7921342\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.289301 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txm7p\" (UniqueName: \"kubernetes.io/projected/8de4ba02-a54d-4ffb-a781-267c6a741abe-kube-api-access-txm7p\") pod \"8de4ba02-a54d-4ffb-a781-267c6a741abe\" (UID: \"8de4ba02-a54d-4ffb-a781-267c6a741abe\") " Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.289321 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x829m\" (UniqueName: \"kubernetes.io/projected/83fd1189-6cde-451a-8c08-18acd7921342-kube-api-access-x829m\") pod 
\"83fd1189-6cde-451a-8c08-18acd7921342\" (UID: \"83fd1189-6cde-451a-8c08-18acd7921342\") " Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.289585 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "83fd1189-6cde-451a-8c08-18acd7921342" (UID: "83fd1189-6cde-451a-8c08-18acd7921342"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.289989 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8de4ba02-a54d-4ffb-a781-267c6a741abe-config" (OuterVolumeSpecName: "config") pod "8de4ba02-a54d-4ffb-a781-267c6a741abe" (UID: "8de4ba02-a54d-4ffb-a781-267c6a741abe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.290452 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-client-ca" (OuterVolumeSpecName: "client-ca") pod "83fd1189-6cde-451a-8c08-18acd7921342" (UID: "83fd1189-6cde-451a-8c08-18acd7921342"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.290578 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-config" (OuterVolumeSpecName: "config") pod "83fd1189-6cde-451a-8c08-18acd7921342" (UID: "83fd1189-6cde-451a-8c08-18acd7921342"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.290702 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8de4ba02-a54d-4ffb-a781-267c6a741abe-client-ca" (OuterVolumeSpecName: "client-ca") pod "8de4ba02-a54d-4ffb-a781-267c6a741abe" (UID: "8de4ba02-a54d-4ffb-a781-267c6a741abe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.295886 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83fd1189-6cde-451a-8c08-18acd7921342-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "83fd1189-6cde-451a-8c08-18acd7921342" (UID: "83fd1189-6cde-451a-8c08-18acd7921342"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.297071 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8de4ba02-a54d-4ffb-a781-267c6a741abe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8de4ba02-a54d-4ffb-a781-267c6a741abe" (UID: "8de4ba02-a54d-4ffb-a781-267c6a741abe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.297678 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8de4ba02-a54d-4ffb-a781-267c6a741abe-kube-api-access-txm7p" (OuterVolumeSpecName: "kube-api-access-txm7p") pod "8de4ba02-a54d-4ffb-a781-267c6a741abe" (UID: "8de4ba02-a54d-4ffb-a781-267c6a741abe"). InnerVolumeSpecName "kube-api-access-txm7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.297788 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83fd1189-6cde-451a-8c08-18acd7921342-kube-api-access-x829m" (OuterVolumeSpecName: "kube-api-access-x829m") pod "83fd1189-6cde-451a-8c08-18acd7921342" (UID: "83fd1189-6cde-451a-8c08-18acd7921342"). InnerVolumeSpecName "kube-api-access-x829m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.346386 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.391207 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-client-ca\") pod \"route-controller-manager-749d64fd6d-qsb8p\" (UID: \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\") " pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.391355 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-config\") pod \"route-controller-manager-749d64fd6d-qsb8p\" (UID: \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\") " pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.391429 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-serving-cert\") pod \"route-controller-manager-749d64fd6d-qsb8p\" (UID: \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\") " 
pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.391524 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2j9x\" (UniqueName: \"kubernetes.io/projected/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-kube-api-access-s2j9x\") pod \"route-controller-manager-749d64fd6d-qsb8p\" (UID: \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\") " pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.392067 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8de4ba02-a54d-4ffb-a781-267c6a741abe-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.392108 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.392136 4762 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-client-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.392242 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txm7p\" (UniqueName: \"kubernetes.io/projected/8de4ba02-a54d-4ffb-a781-267c6a741abe-kube-api-access-txm7p\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.392258 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x829m\" (UniqueName: \"kubernetes.io/projected/83fd1189-6cde-451a-8c08-18acd7921342-kube-api-access-x829m\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.392272 4762 reconciler_common.go:293] "Volume 
detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/83fd1189-6cde-451a-8c08-18acd7921342-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.392285 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8de4ba02-a54d-4ffb-a781-267c6a741abe-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.392296 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/83fd1189-6cde-451a-8c08-18acd7921342-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.392307 4762 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8de4ba02-a54d-4ffb-a781-267c6a741abe-client-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.422649 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq"] Mar 08 00:26:02 crc kubenswrapper[4762]: E0308 00:26:02.422928 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b14b4da-20cb-4559-9d3b-007f0f76ae72" containerName="extract-utilities" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.422945 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b14b4da-20cb-4559-9d3b-007f0f76ae72" containerName="extract-utilities" Mar 08 00:26:02 crc kubenswrapper[4762]: E0308 00:26:02.422956 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b14b4da-20cb-4559-9d3b-007f0f76ae72" containerName="registry-server" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.422963 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b14b4da-20cb-4559-9d3b-007f0f76ae72" containerName="registry-server" Mar 08 00:26:02 crc kubenswrapper[4762]: E0308 00:26:02.422982 4762 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b14b4da-20cb-4559-9d3b-007f0f76ae72" containerName="extract-content" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.422991 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b14b4da-20cb-4559-9d3b-007f0f76ae72" containerName="extract-content" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.423100 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b14b4da-20cb-4559-9d3b-007f0f76ae72" containerName="registry-server" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.423510 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.465155 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.465288 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.465621 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.465708 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.471081 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.471029 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.472400 4762 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.472647 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.474167 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.474407 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.474857 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.475007 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.489092 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq"] Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.493079 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfxtk\" (UniqueName: \"kubernetes.io/projected/2b14b4da-20cb-4559-9d3b-007f0f76ae72-kube-api-access-cfxtk\") pod \"2b14b4da-20cb-4559-9d3b-007f0f76ae72\" (UID: \"2b14b4da-20cb-4559-9d3b-007f0f76ae72\") " Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.495349 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.493401 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2b14b4da-20cb-4559-9d3b-007f0f76ae72-catalog-content\") pod \"2b14b4da-20cb-4559-9d3b-007f0f76ae72\" (UID: \"2b14b4da-20cb-4559-9d3b-007f0f76ae72\") " Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.496212 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b14b4da-20cb-4559-9d3b-007f0f76ae72-utilities\") pod \"2b14b4da-20cb-4559-9d3b-007f0f76ae72\" (UID: \"2b14b4da-20cb-4559-9d3b-007f0f76ae72\") " Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.496363 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.496408 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-config\") pod \"route-controller-manager-749d64fd6d-qsb8p\" (UID: \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\") " pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.496432 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.496463 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-serving-cert\") pod \"route-controller-manager-749d64fd6d-qsb8p\" (UID: \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\") " pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.496508 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2e89b5ad-4281-471d-a5c5-55a2351a9cab-audit-policies\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.496560 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.496604 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2j9x\" (UniqueName: \"kubernetes.io/projected/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-kube-api-access-s2j9x\") pod \"route-controller-manager-749d64fd6d-qsb8p\" (UID: \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\") " pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.496631 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-user-template-login\") pod 
\"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.496668 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njbrk\" (UniqueName: \"kubernetes.io/projected/2e89b5ad-4281-471d-a5c5-55a2351a9cab-kube-api-access-njbrk\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.496703 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.496725 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.496836 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2e89b5ad-4281-471d-a5c5-55a2351a9cab-audit-dir\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: 
I0308 00:26:02.496887 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-user-template-error\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.496924 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-session\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.496971 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-router-certs\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.497022 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-client-ca\") pod \"route-controller-manager-749d64fd6d-qsb8p\" (UID: \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\") " pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.497044 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-service-ca\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.497086 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.498015 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.498122 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b14b4da-20cb-4559-9d3b-007f0f76ae72-utilities" (OuterVolumeSpecName: "utilities") pod "2b14b4da-20cb-4559-9d3b-007f0f76ae72" (UID: "2b14b4da-20cb-4559-9d3b-007f0f76ae72"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.498962 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-client-ca\") pod \"route-controller-manager-749d64fd6d-qsb8p\" (UID: \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\") " pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.499067 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-config\") pod \"route-controller-manager-749d64fd6d-qsb8p\" (UID: \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\") " pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.499538 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.499811 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b14b4da-20cb-4559-9d3b-007f0f76ae72-kube-api-access-cfxtk" (OuterVolumeSpecName: "kube-api-access-cfxtk") pod "2b14b4da-20cb-4559-9d3b-007f0f76ae72" (UID: "2b14b4da-20cb-4559-9d3b-007f0f76ae72"). InnerVolumeSpecName "kube-api-access-cfxtk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.508024 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-serving-cert\") pod \"route-controller-manager-749d64fd6d-qsb8p\" (UID: \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\") " pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.517662 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2j9x\" (UniqueName: \"kubernetes.io/projected/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-kube-api-access-s2j9x\") pod \"route-controller-manager-749d64fd6d-qsb8p\" (UID: \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\") " pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.524653 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.598677 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-user-template-login\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.598751 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njbrk\" (UniqueName: \"kubernetes.io/projected/2e89b5ad-4281-471d-a5c5-55a2351a9cab-kube-api-access-njbrk\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.598842 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.598892 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.598956 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2e89b5ad-4281-471d-a5c5-55a2351a9cab-audit-dir\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.598980 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-user-template-error\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.599016 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-session\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.599041 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-router-certs\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.599084 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-service-ca\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " 
pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.599123 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.599152 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.599193 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.599227 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2e89b5ad-4281-471d-a5c5-55a2351a9cab-audit-policies\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.599271 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.599339 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfxtk\" (UniqueName: \"kubernetes.io/projected/2b14b4da-20cb-4559-9d3b-007f0f76ae72-kube-api-access-cfxtk\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.599360 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b14b4da-20cb-4559-9d3b-007f0f76ae72-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.600257 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-service-ca\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.600641 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.600683 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.601295 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2e89b5ad-4281-471d-a5c5-55a2351a9cab-audit-policies\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.601503 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2e89b5ad-4281-471d-a5c5-55a2351a9cab-audit-dir\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.604088 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-user-template-login\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.604217 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-session\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.604474 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.605281 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-router-certs\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.605592 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-user-template-error\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.618645 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njbrk\" (UniqueName: \"kubernetes.io/projected/2e89b5ad-4281-471d-a5c5-55a2351a9cab-kube-api-access-njbrk\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.620411 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc 
kubenswrapper[4762]: I0308 00:26:02.620471 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.620658 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2e89b5ad-4281-471d-a5c5-55a2351a9cab-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9c9dfc54c-9qcbq\" (UID: \"2e89b5ad-4281-471d-a5c5-55a2351a9cab\") " pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.632197 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.632197 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d4bb595c4-x62ll" event={"ID":"83fd1189-6cde-451a-8c08-18acd7921342","Type":"ContainerDied","Data":"f484163fb5aab00925f58be30c57da8435fc767c27af0670de83661ea1429071"} Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.632275 4762 scope.go:117] "RemoveContainer" containerID="2a3f947cc43bf1ddd4ed4d5b64a50b4ec84d66babc94d50c267cb6eab4a02eb7" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.634741 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" event={"ID":"8de4ba02-a54d-4ffb-a781-267c6a741abe","Type":"ContainerDied","Data":"20ca4f75465887dab3c562649e862663de78f35cf4ca4306d83b2c2a03fd9b19"} Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 
00:26:02.634847 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.639043 4762 generic.go:334] "Generic (PLEG): container finished" podID="2b14b4da-20cb-4559-9d3b-007f0f76ae72" containerID="751505370b02ff8f193bce741bb9e1f87c585ffd338b17908edf06f91b91cef2" exitCode=0 Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.639144 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g7kzd" event={"ID":"2b14b4da-20cb-4559-9d3b-007f0f76ae72","Type":"ContainerDied","Data":"751505370b02ff8f193bce741bb9e1f87c585ffd338b17908edf06f91b91cef2"} Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.639209 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g7kzd" event={"ID":"2b14b4da-20cb-4559-9d3b-007f0f76ae72","Type":"ContainerDied","Data":"134802603ff4abacd7b220d039cc531ebab9e8e4738ce478483eaf61ec066059"} Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.639206 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g7kzd" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.645675 4762 generic.go:334] "Generic (PLEG): container finished" podID="1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" containerID="30480bd761de474c85ccb5d3a9140f25f5375622ad707eda2f845d1fee55d32c" exitCode=0 Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.646160 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8bhh" event={"ID":"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b","Type":"ContainerDied","Data":"30480bd761de474c85ccb5d3a9140f25f5375622ad707eda2f845d1fee55d32c"} Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.663860 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d4bb595c4-x62ll"] Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.665202 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.669174 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d4bb595c4-x62ll"] Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.673916 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz"] Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.675030 4762 scope.go:117] "RemoveContainer" containerID="7235c25da98b7ff58a4ed662356d46d92208fd90aac12703cffdc723bc945747" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.677502 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c9ccb858b-blnmz"] Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.681546 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2b14b4da-20cb-4559-9d3b-007f0f76ae72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b14b4da-20cb-4559-9d3b-007f0f76ae72" (UID: "2b14b4da-20cb-4559-9d3b-007f0f76ae72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.700443 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-catalog-content\") pod \"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b\" (UID: \"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b\") " Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.700791 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b14b4da-20cb-4559-9d3b-007f0f76ae72-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.707052 4762 scope.go:117] "RemoveContainer" containerID="751505370b02ff8f193bce741bb9e1f87c585ffd338b17908edf06f91b91cef2" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.723312 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p"] Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.730970 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" (UID: "1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.739503 4762 scope.go:117] "RemoveContainer" containerID="c2123713275486e7309a836f890300537812d275a713049177980929b010f374" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.764180 4762 scope.go:117] "RemoveContainer" containerID="97f10937433dda76ee4a69c5360ec7e1b2c3d5dd9dd2d4c254289486305387fb" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.780784 4762 scope.go:117] "RemoveContainer" containerID="751505370b02ff8f193bce741bb9e1f87c585ffd338b17908edf06f91b91cef2" Mar 08 00:26:02 crc kubenswrapper[4762]: E0308 00:26:02.781270 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"751505370b02ff8f193bce741bb9e1f87c585ffd338b17908edf06f91b91cef2\": container with ID starting with 751505370b02ff8f193bce741bb9e1f87c585ffd338b17908edf06f91b91cef2 not found: ID does not exist" containerID="751505370b02ff8f193bce741bb9e1f87c585ffd338b17908edf06f91b91cef2" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.781331 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"751505370b02ff8f193bce741bb9e1f87c585ffd338b17908edf06f91b91cef2"} err="failed to get container status \"751505370b02ff8f193bce741bb9e1f87c585ffd338b17908edf06f91b91cef2\": rpc error: code = NotFound desc = could not find container \"751505370b02ff8f193bce741bb9e1f87c585ffd338b17908edf06f91b91cef2\": container with ID starting with 751505370b02ff8f193bce741bb9e1f87c585ffd338b17908edf06f91b91cef2 not found: ID does not exist" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.781367 4762 scope.go:117] "RemoveContainer" containerID="c2123713275486e7309a836f890300537812d275a713049177980929b010f374" Mar 08 00:26:02 crc kubenswrapper[4762]: E0308 00:26:02.781871 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"c2123713275486e7309a836f890300537812d275a713049177980929b010f374\": container with ID starting with c2123713275486e7309a836f890300537812d275a713049177980929b010f374 not found: ID does not exist" containerID="c2123713275486e7309a836f890300537812d275a713049177980929b010f374" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.781913 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2123713275486e7309a836f890300537812d275a713049177980929b010f374"} err="failed to get container status \"c2123713275486e7309a836f890300537812d275a713049177980929b010f374\": rpc error: code = NotFound desc = could not find container \"c2123713275486e7309a836f890300537812d275a713049177980929b010f374\": container with ID starting with c2123713275486e7309a836f890300537812d275a713049177980929b010f374 not found: ID does not exist" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.781951 4762 scope.go:117] "RemoveContainer" containerID="97f10937433dda76ee4a69c5360ec7e1b2c3d5dd9dd2d4c254289486305387fb" Mar 08 00:26:02 crc kubenswrapper[4762]: E0308 00:26:02.782328 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97f10937433dda76ee4a69c5360ec7e1b2c3d5dd9dd2d4c254289486305387fb\": container with ID starting with 97f10937433dda76ee4a69c5360ec7e1b2c3d5dd9dd2d4c254289486305387fb not found: ID does not exist" containerID="97f10937433dda76ee4a69c5360ec7e1b2c3d5dd9dd2d4c254289486305387fb" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.782383 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97f10937433dda76ee4a69c5360ec7e1b2c3d5dd9dd2d4c254289486305387fb"} err="failed to get container status \"97f10937433dda76ee4a69c5360ec7e1b2c3d5dd9dd2d4c254289486305387fb\": rpc error: code = NotFound desc = could not find container \"97f10937433dda76ee4a69c5360ec7e1b2c3d5dd9dd2d4c254289486305387fb\": 
container with ID starting with 97f10937433dda76ee4a69c5360ec7e1b2c3d5dd9dd2d4c254289486305387fb not found: ID does not exist" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.785842 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.801815 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h62np\" (UniqueName: \"kubernetes.io/projected/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-kube-api-access-h62np\") pod \"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b\" (UID: \"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b\") " Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.802145 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-utilities\") pod \"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b\" (UID: \"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b\") " Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.802856 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.802962 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-utilities" (OuterVolumeSpecName: "utilities") pod "1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" (UID: "1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.804687 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-kube-api-access-h62np" (OuterVolumeSpecName: "kube-api-access-h62np") pod "1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" (UID: "1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b"). InnerVolumeSpecName "kube-api-access-h62np". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.903465 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.903521 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h62np\" (UniqueName: \"kubernetes.io/projected/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b-kube-api-access-h62np\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.967307 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g7kzd"] Mar 08 00:26:02 crc kubenswrapper[4762]: I0308 00:26:02.971029 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g7kzd"] Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.198168 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq"] Mar 08 00:26:03 crc kubenswrapper[4762]: W0308 00:26:03.204194 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e89b5ad_4281_471d_a5c5_55a2351a9cab.slice/crio-2973449244a82ffa903db3d799c74c651803aa7131136670d831301efcebab36 WatchSource:0}: Error finding container 2973449244a82ffa903db3d799c74c651803aa7131136670d831301efcebab36: 
Status 404 returned error can't find the container with id 2973449244a82ffa903db3d799c74c651803aa7131136670d831301efcebab36 Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.269128 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b14b4da-20cb-4559-9d3b-007f0f76ae72" path="/var/lib/kubelet/pods/2b14b4da-20cb-4559-9d3b-007f0f76ae72/volumes" Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.269989 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83fd1189-6cde-451a-8c08-18acd7921342" path="/var/lib/kubelet/pods/83fd1189-6cde-451a-8c08-18acd7921342/volumes" Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.270528 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8de4ba02-a54d-4ffb-a781-267c6a741abe" path="/var/lib/kubelet/pods/8de4ba02-a54d-4ffb-a781-267c6a741abe/volumes" Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.659330 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k8bhh" event={"ID":"1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b","Type":"ContainerDied","Data":"8ebcf030a6291365d0b5a4b18b4f695fceb60bde46a2ea6fb819c695cb25a4d5"} Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.659390 4762 scope.go:117] "RemoveContainer" containerID="30480bd761de474c85ccb5d3a9140f25f5375622ad707eda2f845d1fee55d32c" Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.659412 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k8bhh" Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.663850 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" event={"ID":"2e89b5ad-4281-471d-a5c5-55a2351a9cab","Type":"ContainerStarted","Data":"77eaabcbc693bcc171b95666032866ca1103633ba6fb9bcb20b9c9efe19a804a"} Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.663877 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" event={"ID":"2e89b5ad-4281-471d-a5c5-55a2351a9cab","Type":"ContainerStarted","Data":"2973449244a82ffa903db3d799c74c651803aa7131136670d831301efcebab36"} Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.664447 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.665696 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" event={"ID":"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d","Type":"ContainerStarted","Data":"d4df9a123d165c1d10813231ab842008fdfa882803ffd4933d9d7707d13cb6e2"} Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.665722 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" event={"ID":"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d","Type":"ContainerStarted","Data":"257b7446531697457b4f76f217b32c39ebd267c393ba94def78d76a5ec51e56a"} Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.666513 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.672510 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.676859 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8bhh"] Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.679427 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k8bhh"] Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.681570 4762 scope.go:117] "RemoveContainer" containerID="bc353fb9bef87ed40ef7218999b5f8250966642cb909d15a46905980b00c5ced" Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.705372 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" podStartSLOduration=30.705348357 podStartE2EDuration="30.705348357s" podCreationTimestamp="2026-03-08 00:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:26:03.69864432 +0000 UTC m=+185.172788674" watchObservedRunningTime="2026-03-08 00:26:03.705348357 +0000 UTC m=+185.179492701" Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.730988 4762 scope.go:117] "RemoveContainer" containerID="5cd9084f379c29feb9598638daabdff9a8d3ed92e4c7a4d44b93643a8b039d86" Mar 08 00:26:03 crc kubenswrapper[4762]: I0308 00:26:03.734907 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" podStartSLOduration=3.734884046 podStartE2EDuration="3.734884046s" podCreationTimestamp="2026-03-08 00:26:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:26:03.723961018 +0000 UTC m=+185.198105362" watchObservedRunningTime="2026-03-08 00:26:03.734884046 +0000 UTC m=+185.209028390" Mar 08 00:26:04 crc 
kubenswrapper[4762]: I0308 00:26:04.150131 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.419441 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-b74947848-qnl6n"] Mar 08 00:26:04 crc kubenswrapper[4762]: E0308 00:26:04.419674 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" containerName="extract-content" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.419687 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" containerName="extract-content" Mar 08 00:26:04 crc kubenswrapper[4762]: E0308 00:26:04.419698 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" containerName="extract-utilities" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.419704 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" containerName="extract-utilities" Mar 08 00:26:04 crc kubenswrapper[4762]: E0308 00:26:04.419715 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" containerName="registry-server" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.419722 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" containerName="registry-server" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.419829 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" containerName="registry-server" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.420173 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.423799 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.423841 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.423799 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.424390 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.424676 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.425846 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.436579 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.436819 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-proxy-ca-bundles\") pod \"controller-manager-b74947848-qnl6n\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.436923 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/62001691-4a9a-45c4-8f19-e9a3dbf1812d-serving-cert\") pod \"controller-manager-b74947848-qnl6n\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.436953 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-client-ca\") pod \"controller-manager-b74947848-qnl6n\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.436989 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-config\") pod \"controller-manager-b74947848-qnl6n\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.437223 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89c2d\" (UniqueName: \"kubernetes.io/projected/62001691-4a9a-45c4-8f19-e9a3dbf1812d-kube-api-access-89c2d\") pod \"controller-manager-b74947848-qnl6n\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.437802 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b74947848-qnl6n"] Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.538565 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-proxy-ca-bundles\") pod \"controller-manager-b74947848-qnl6n\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.538657 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62001691-4a9a-45c4-8f19-e9a3dbf1812d-serving-cert\") pod \"controller-manager-b74947848-qnl6n\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.538682 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-client-ca\") pod \"controller-manager-b74947848-qnl6n\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.538703 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-config\") pod \"controller-manager-b74947848-qnl6n\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.538778 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89c2d\" (UniqueName: \"kubernetes.io/projected/62001691-4a9a-45c4-8f19-e9a3dbf1812d-kube-api-access-89c2d\") pod \"controller-manager-b74947848-qnl6n\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.540474 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-client-ca\") pod \"controller-manager-b74947848-qnl6n\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.540632 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-proxy-ca-bundles\") pod \"controller-manager-b74947848-qnl6n\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.541322 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-config\") pod \"controller-manager-b74947848-qnl6n\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.547928 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62001691-4a9a-45c4-8f19-e9a3dbf1812d-serving-cert\") pod \"controller-manager-b74947848-qnl6n\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:04 crc kubenswrapper[4762]: I0308 00:26:04.561265 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89c2d\" (UniqueName: \"kubernetes.io/projected/62001691-4a9a-45c4-8f19-e9a3dbf1812d-kube-api-access-89c2d\") pod \"controller-manager-b74947848-qnl6n\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:04 crc 
kubenswrapper[4762]: I0308 00:26:04.740449 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:05 crc kubenswrapper[4762]: I0308 00:26:05.243697 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:26:05 crc kubenswrapper[4762]: I0308 00:26:05.274571 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b" path="/var/lib/kubelet/pods/1f99b9f4-d6bf-4cc3-98d5-e30c34c7295b/volumes" Mar 08 00:26:05 crc kubenswrapper[4762]: I0308 00:26:05.573817 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:26:05 crc kubenswrapper[4762]: I0308 00:26:05.786149 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:26:05 crc kubenswrapper[4762]: I0308 00:26:05.848275 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.308931 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x7tm2"] Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.310168 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x7tm2" podUID="82abd8f0-adc8-4094-a833-073e1cc68f50" containerName="registry-server" containerID="cri-o://0fe63cc4c723a89aa43b98bbd972921b83a06c215c305bb0eecf60ea51d58937" gracePeriod=2 Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.569796 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-b74947848-qnl6n"] Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.648434 
4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.724055 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" event={"ID":"62001691-4a9a-45c4-8f19-e9a3dbf1812d","Type":"ContainerStarted","Data":"b106ddbcf514b40c86da6e3aeea23c8692ca10d94c659d78b9ef3c14f51910e6"} Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.724109 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" event={"ID":"62001691-4a9a-45c4-8f19-e9a3dbf1812d","Type":"ContainerStarted","Data":"78721b4f29f9aa0209909dbfb0d571aae8fa6b4e98bcb14b367a00547877dd34"} Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.724502 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.725714 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548826-5sjq2" event={"ID":"93accc2a-5975-4e5f-8927-264224130aca","Type":"ContainerStarted","Data":"4e9d10f45a83942d86c72ff918c3b6b3a4773a81038463214e0bc0d25c26146e"} Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.728449 4762 generic.go:334] "Generic (PLEG): container finished" podID="82abd8f0-adc8-4094-a833-073e1cc68f50" containerID="0fe63cc4c723a89aa43b98bbd972921b83a06c215c305bb0eecf60ea51d58937" exitCode=0 Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.728507 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7tm2" event={"ID":"82abd8f0-adc8-4094-a833-073e1cc68f50","Type":"ContainerDied","Data":"0fe63cc4c723a89aa43b98bbd972921b83a06c215c305bb0eecf60ea51d58937"} Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.728532 4762 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x7tm2" event={"ID":"82abd8f0-adc8-4094-a833-073e1cc68f50","Type":"ContainerDied","Data":"2c53242d8df303527b26738a590752b3e98d4e5d33daad64a5d706dfca381072"} Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.728549 4762 scope.go:117] "RemoveContainer" containerID="0fe63cc4c723a89aa43b98bbd972921b83a06c215c305bb0eecf60ea51d58937" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.728668 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x7tm2" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.737687 4762 patch_prober.go:28] interesting pod/controller-manager-b74947848-qnl6n container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.737955 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" podUID="62001691-4a9a-45c4-8f19-e9a3dbf1812d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.755123 4762 scope.go:117] "RemoveContainer" containerID="4d10276cebc95dd8209dce16c538620703bc40477e51d7a5354fcbc59b0368cb" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.761924 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548826-5sjq2" podStartSLOduration=1.511997296 podStartE2EDuration="8.761906225s" podCreationTimestamp="2026-03-08 00:26:00 +0000 UTC" firstStartedPulling="2026-03-08 00:26:01.023110991 +0000 UTC m=+182.497255355" lastFinishedPulling="2026-03-08 
00:26:08.27301994 +0000 UTC m=+189.747164284" observedRunningTime="2026-03-08 00:26:08.761703278 +0000 UTC m=+190.235847642" watchObservedRunningTime="2026-03-08 00:26:08.761906225 +0000 UTC m=+190.236050569" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.764226 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" podStartSLOduration=8.764217914 podStartE2EDuration="8.764217914s" podCreationTimestamp="2026-03-08 00:26:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:26:08.745507129 +0000 UTC m=+190.219651483" watchObservedRunningTime="2026-03-08 00:26:08.764217914 +0000 UTC m=+190.238362258" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.777640 4762 scope.go:117] "RemoveContainer" containerID="06ad7787d015a0179d7dadede52b0e5ca9ce43480447305d1b723116e8b47c53" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.789096 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82abd8f0-adc8-4094-a833-073e1cc68f50-catalog-content\") pod \"82abd8f0-adc8-4094-a833-073e1cc68f50\" (UID: \"82abd8f0-adc8-4094-a833-073e1cc68f50\") " Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.789214 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvj22\" (UniqueName: \"kubernetes.io/projected/82abd8f0-adc8-4094-a833-073e1cc68f50-kube-api-access-fvj22\") pod \"82abd8f0-adc8-4094-a833-073e1cc68f50\" (UID: \"82abd8f0-adc8-4094-a833-073e1cc68f50\") " Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.789251 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82abd8f0-adc8-4094-a833-073e1cc68f50-utilities\") pod \"82abd8f0-adc8-4094-a833-073e1cc68f50\" 
(UID: \"82abd8f0-adc8-4094-a833-073e1cc68f50\") " Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.790840 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82abd8f0-adc8-4094-a833-073e1cc68f50-utilities" (OuterVolumeSpecName: "utilities") pod "82abd8f0-adc8-4094-a833-073e1cc68f50" (UID: "82abd8f0-adc8-4094-a833-073e1cc68f50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.799911 4762 scope.go:117] "RemoveContainer" containerID="0fe63cc4c723a89aa43b98bbd972921b83a06c215c305bb0eecf60ea51d58937" Mar 08 00:26:08 crc kubenswrapper[4762]: E0308 00:26:08.800790 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fe63cc4c723a89aa43b98bbd972921b83a06c215c305bb0eecf60ea51d58937\": container with ID starting with 0fe63cc4c723a89aa43b98bbd972921b83a06c215c305bb0eecf60ea51d58937 not found: ID does not exist" containerID="0fe63cc4c723a89aa43b98bbd972921b83a06c215c305bb0eecf60ea51d58937" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.800860 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fe63cc4c723a89aa43b98bbd972921b83a06c215c305bb0eecf60ea51d58937"} err="failed to get container status \"0fe63cc4c723a89aa43b98bbd972921b83a06c215c305bb0eecf60ea51d58937\": rpc error: code = NotFound desc = could not find container \"0fe63cc4c723a89aa43b98bbd972921b83a06c215c305bb0eecf60ea51d58937\": container with ID starting with 0fe63cc4c723a89aa43b98bbd972921b83a06c215c305bb0eecf60ea51d58937 not found: ID does not exist" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.800897 4762 scope.go:117] "RemoveContainer" containerID="4d10276cebc95dd8209dce16c538620703bc40477e51d7a5354fcbc59b0368cb" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.800913 4762 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82abd8f0-adc8-4094-a833-073e1cc68f50-kube-api-access-fvj22" (OuterVolumeSpecName: "kube-api-access-fvj22") pod "82abd8f0-adc8-4094-a833-073e1cc68f50" (UID: "82abd8f0-adc8-4094-a833-073e1cc68f50"). InnerVolumeSpecName "kube-api-access-fvj22". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:26:08 crc kubenswrapper[4762]: E0308 00:26:08.801227 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d10276cebc95dd8209dce16c538620703bc40477e51d7a5354fcbc59b0368cb\": container with ID starting with 4d10276cebc95dd8209dce16c538620703bc40477e51d7a5354fcbc59b0368cb not found: ID does not exist" containerID="4d10276cebc95dd8209dce16c538620703bc40477e51d7a5354fcbc59b0368cb" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.801297 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d10276cebc95dd8209dce16c538620703bc40477e51d7a5354fcbc59b0368cb"} err="failed to get container status \"4d10276cebc95dd8209dce16c538620703bc40477e51d7a5354fcbc59b0368cb\": rpc error: code = NotFound desc = could not find container \"4d10276cebc95dd8209dce16c538620703bc40477e51d7a5354fcbc59b0368cb\": container with ID starting with 4d10276cebc95dd8209dce16c538620703bc40477e51d7a5354fcbc59b0368cb not found: ID does not exist" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.801326 4762 scope.go:117] "RemoveContainer" containerID="06ad7787d015a0179d7dadede52b0e5ca9ce43480447305d1b723116e8b47c53" Mar 08 00:26:08 crc kubenswrapper[4762]: E0308 00:26:08.801597 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06ad7787d015a0179d7dadede52b0e5ca9ce43480447305d1b723116e8b47c53\": container with ID starting with 06ad7787d015a0179d7dadede52b0e5ca9ce43480447305d1b723116e8b47c53 not found: ID does not exist" 
containerID="06ad7787d015a0179d7dadede52b0e5ca9ce43480447305d1b723116e8b47c53" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.801640 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06ad7787d015a0179d7dadede52b0e5ca9ce43480447305d1b723116e8b47c53"} err="failed to get container status \"06ad7787d015a0179d7dadede52b0e5ca9ce43480447305d1b723116e8b47c53\": rpc error: code = NotFound desc = could not find container \"06ad7787d015a0179d7dadede52b0e5ca9ce43480447305d1b723116e8b47c53\": container with ID starting with 06ad7787d015a0179d7dadede52b0e5ca9ce43480447305d1b723116e8b47c53 not found: ID does not exist" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.850287 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82abd8f0-adc8-4094-a833-073e1cc68f50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82abd8f0-adc8-4094-a833-073e1cc68f50" (UID: "82abd8f0-adc8-4094-a833-073e1cc68f50"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.890925 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82abd8f0-adc8-4094-a833-073e1cc68f50-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.890960 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82abd8f0-adc8-4094-a833-073e1cc68f50-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.890973 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvj22\" (UniqueName: \"kubernetes.io/projected/82abd8f0-adc8-4094-a833-073e1cc68f50-kube-api-access-fvj22\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:08 crc kubenswrapper[4762]: I0308 00:26:08.996567 4762 csr.go:261] certificate signing request csr-lcqsq is approved, waiting to be issued Mar 08 00:26:09 crc kubenswrapper[4762]: I0308 00:26:09.007268 4762 csr.go:257] certificate signing request csr-lcqsq is issued Mar 08 00:26:09 crc kubenswrapper[4762]: I0308 00:26:09.113794 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x7tm2"] Mar 08 00:26:09 crc kubenswrapper[4762]: I0308 00:26:09.118304 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x7tm2"] Mar 08 00:26:09 crc kubenswrapper[4762]: I0308 00:26:09.270713 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82abd8f0-adc8-4094-a833-073e1cc68f50" path="/var/lib/kubelet/pods/82abd8f0-adc8-4094-a833-073e1cc68f50/volumes" Mar 08 00:26:09 crc kubenswrapper[4762]: I0308 00:26:09.740494 4762 generic.go:334] "Generic (PLEG): container finished" podID="93accc2a-5975-4e5f-8927-264224130aca" containerID="4e9d10f45a83942d86c72ff918c3b6b3a4773a81038463214e0bc0d25c26146e" exitCode=0 
Mar 08 00:26:09 crc kubenswrapper[4762]: I0308 00:26:09.740646 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548826-5sjq2" event={"ID":"93accc2a-5975-4e5f-8927-264224130aca","Type":"ContainerDied","Data":"4e9d10f45a83942d86c72ff918c3b6b3a4773a81038463214e0bc0d25c26146e"} Mar 08 00:26:09 crc kubenswrapper[4762]: I0308 00:26:09.747978 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:10 crc kubenswrapper[4762]: I0308 00:26:10.008444 4762 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-12 17:42:25.471685031 +0000 UTC Mar 08 00:26:10 crc kubenswrapper[4762]: I0308 00:26:10.008851 4762 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 5993h16m15.462839494s for next certificate rotation Mar 08 00:26:10 crc kubenswrapper[4762]: I0308 00:26:10.284697 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 08 00:26:11 crc kubenswrapper[4762]: I0308 00:26:11.009524 4762 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-17 07:37:37.82940157 +0000 UTC Mar 08 00:26:11 crc kubenswrapper[4762]: I0308 00:26:11.009583 4762 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6103h11m26.819823826s for next certificate rotation Mar 08 00:26:11 crc kubenswrapper[4762]: I0308 00:26:11.172971 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548826-5sjq2" Mar 08 00:26:11 crc kubenswrapper[4762]: I0308 00:26:11.328152 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxmdr\" (UniqueName: \"kubernetes.io/projected/93accc2a-5975-4e5f-8927-264224130aca-kube-api-access-lxmdr\") pod \"93accc2a-5975-4e5f-8927-264224130aca\" (UID: \"93accc2a-5975-4e5f-8927-264224130aca\") " Mar 08 00:26:11 crc kubenswrapper[4762]: I0308 00:26:11.334274 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93accc2a-5975-4e5f-8927-264224130aca-kube-api-access-lxmdr" (OuterVolumeSpecName: "kube-api-access-lxmdr") pod "93accc2a-5975-4e5f-8927-264224130aca" (UID: "93accc2a-5975-4e5f-8927-264224130aca"). InnerVolumeSpecName "kube-api-access-lxmdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:26:11 crc kubenswrapper[4762]: I0308 00:26:11.429892 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxmdr\" (UniqueName: \"kubernetes.io/projected/93accc2a-5975-4e5f-8927-264224130aca-kube-api-access-lxmdr\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:11 crc kubenswrapper[4762]: I0308 00:26:11.763137 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548826-5sjq2" event={"ID":"93accc2a-5975-4e5f-8927-264224130aca","Type":"ContainerDied","Data":"47cbfb970c2b7ab17cb4461b5bc59b3029bb551507eeae3a28216a6a26026fe4"} Mar 08 00:26:11 crc kubenswrapper[4762]: I0308 00:26:11.763188 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47cbfb970c2b7ab17cb4461b5bc59b3029bb551507eeae3a28216a6a26026fe4" Mar 08 00:26:11 crc kubenswrapper[4762]: I0308 00:26:11.763248 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548826-5sjq2" Mar 08 00:26:20 crc kubenswrapper[4762]: I0308 00:26:20.949595 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b74947848-qnl6n"] Mar 08 00:26:20 crc kubenswrapper[4762]: I0308 00:26:20.950202 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" podUID="62001691-4a9a-45c4-8f19-e9a3dbf1812d" containerName="controller-manager" containerID="cri-o://b106ddbcf514b40c86da6e3aeea23c8692ca10d94c659d78b9ef3c14f51910e6" gracePeriod=30 Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.048866 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p"] Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.049121 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" podUID="7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d" containerName="route-controller-manager" containerID="cri-o://d4df9a123d165c1d10813231ab842008fdfa882803ffd4933d9d7707d13cb6e2" gracePeriod=30 Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.581889 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.585454 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.589243 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62001691-4a9a-45c4-8f19-e9a3dbf1812d-serving-cert\") pod \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.589341 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-config\") pod \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\" (UID: \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\") " Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.590176 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-client-ca" (OuterVolumeSpecName: "client-ca") pod "62001691-4a9a-45c4-8f19-e9a3dbf1812d" (UID: "62001691-4a9a-45c4-8f19-e9a3dbf1812d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.590461 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-config" (OuterVolumeSpecName: "config") pod "7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d" (UID: "7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.589377 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-client-ca\") pod \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.590610 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-config\") pod \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.590691 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-serving-cert\") pod \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\" (UID: \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\") " Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.590741 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-proxy-ca-bundles\") pod \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.590806 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89c2d\" (UniqueName: \"kubernetes.io/projected/62001691-4a9a-45c4-8f19-e9a3dbf1812d-kube-api-access-89c2d\") pod \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\" (UID: \"62001691-4a9a-45c4-8f19-e9a3dbf1812d\") " Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.590864 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-client-ca\") pod \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\" (UID: \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\") " Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.590899 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2j9x\" (UniqueName: \"kubernetes.io/projected/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-kube-api-access-s2j9x\") pod \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\" (UID: \"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d\") " Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.591107 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-config" (OuterVolumeSpecName: "config") pod "62001691-4a9a-45c4-8f19-e9a3dbf1812d" (UID: "62001691-4a9a-45c4-8f19-e9a3dbf1812d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.591334 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-client-ca" (OuterVolumeSpecName: "client-ca") pod "7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d" (UID: "7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.591334 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "62001691-4a9a-45c4-8f19-e9a3dbf1812d" (UID: "62001691-4a9a-45c4-8f19-e9a3dbf1812d"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.591849 4762 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.591877 4762 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-client-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.591891 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.591905 4762 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-client-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.591917 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62001691-4a9a-45c4-8f19-e9a3dbf1812d-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.598849 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d" (UID: "7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.598875 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-kube-api-access-s2j9x" (OuterVolumeSpecName: "kube-api-access-s2j9x") pod "7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d" (UID: "7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d"). InnerVolumeSpecName "kube-api-access-s2j9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.598927 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62001691-4a9a-45c4-8f19-e9a3dbf1812d-kube-api-access-89c2d" (OuterVolumeSpecName: "kube-api-access-89c2d") pod "62001691-4a9a-45c4-8f19-e9a3dbf1812d" (UID: "62001691-4a9a-45c4-8f19-e9a3dbf1812d"). InnerVolumeSpecName "kube-api-access-89c2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.598944 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62001691-4a9a-45c4-8f19-e9a3dbf1812d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "62001691-4a9a-45c4-8f19-e9a3dbf1812d" (UID: "62001691-4a9a-45c4-8f19-e9a3dbf1812d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.693320 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.693359 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89c2d\" (UniqueName: \"kubernetes.io/projected/62001691-4a9a-45c4-8f19-e9a3dbf1812d-kube-api-access-89c2d\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.693371 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2j9x\" (UniqueName: \"kubernetes.io/projected/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d-kube-api-access-s2j9x\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.693382 4762 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62001691-4a9a-45c4-8f19-e9a3dbf1812d-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.854015 4762 generic.go:334] "Generic (PLEG): container finished" podID="7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d" containerID="d4df9a123d165c1d10813231ab842008fdfa882803ffd4933d9d7707d13cb6e2" exitCode=0 Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.854123 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" event={"ID":"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d","Type":"ContainerDied","Data":"d4df9a123d165c1d10813231ab842008fdfa882803ffd4933d9d7707d13cb6e2"} Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.854500 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" 
event={"ID":"7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d","Type":"ContainerDied","Data":"257b7446531697457b4f76f217b32c39ebd267c393ba94def78d76a5ec51e56a"} Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.854534 4762 scope.go:117] "RemoveContainer" containerID="d4df9a123d165c1d10813231ab842008fdfa882803ffd4933d9d7707d13cb6e2" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.854147 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.856810 4762 generic.go:334] "Generic (PLEG): container finished" podID="62001691-4a9a-45c4-8f19-e9a3dbf1812d" containerID="b106ddbcf514b40c86da6e3aeea23c8692ca10d94c659d78b9ef3c14f51910e6" exitCode=0 Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.856848 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" event={"ID":"62001691-4a9a-45c4-8f19-e9a3dbf1812d","Type":"ContainerDied","Data":"b106ddbcf514b40c86da6e3aeea23c8692ca10d94c659d78b9ef3c14f51910e6"} Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.856875 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" event={"ID":"62001691-4a9a-45c4-8f19-e9a3dbf1812d","Type":"ContainerDied","Data":"78721b4f29f9aa0209909dbfb0d571aae8fa6b4e98bcb14b367a00547877dd34"} Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.856898 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-b74947848-qnl6n" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.884106 4762 scope.go:117] "RemoveContainer" containerID="d4df9a123d165c1d10813231ab842008fdfa882803ffd4933d9d7707d13cb6e2" Mar 08 00:26:21 crc kubenswrapper[4762]: E0308 00:26:21.884558 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4df9a123d165c1d10813231ab842008fdfa882803ffd4933d9d7707d13cb6e2\": container with ID starting with d4df9a123d165c1d10813231ab842008fdfa882803ffd4933d9d7707d13cb6e2 not found: ID does not exist" containerID="d4df9a123d165c1d10813231ab842008fdfa882803ffd4933d9d7707d13cb6e2" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.884589 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4df9a123d165c1d10813231ab842008fdfa882803ffd4933d9d7707d13cb6e2"} err="failed to get container status \"d4df9a123d165c1d10813231ab842008fdfa882803ffd4933d9d7707d13cb6e2\": rpc error: code = NotFound desc = could not find container \"d4df9a123d165c1d10813231ab842008fdfa882803ffd4933d9d7707d13cb6e2\": container with ID starting with d4df9a123d165c1d10813231ab842008fdfa882803ffd4933d9d7707d13cb6e2 not found: ID does not exist" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.884610 4762 scope.go:117] "RemoveContainer" containerID="b106ddbcf514b40c86da6e3aeea23c8692ca10d94c659d78b9ef3c14f51910e6" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.891824 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-b74947848-qnl6n"] Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.902442 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-b74947848-qnl6n"] Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.907235 4762 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p"] Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.910487 4762 scope.go:117] "RemoveContainer" containerID="b106ddbcf514b40c86da6e3aeea23c8692ca10d94c659d78b9ef3c14f51910e6" Mar 08 00:26:21 crc kubenswrapper[4762]: E0308 00:26:21.911186 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b106ddbcf514b40c86da6e3aeea23c8692ca10d94c659d78b9ef3c14f51910e6\": container with ID starting with b106ddbcf514b40c86da6e3aeea23c8692ca10d94c659d78b9ef3c14f51910e6 not found: ID does not exist" containerID="b106ddbcf514b40c86da6e3aeea23c8692ca10d94c659d78b9ef3c14f51910e6" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.911312 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b106ddbcf514b40c86da6e3aeea23c8692ca10d94c659d78b9ef3c14f51910e6"} err="failed to get container status \"b106ddbcf514b40c86da6e3aeea23c8692ca10d94c659d78b9ef3c14f51910e6\": rpc error: code = NotFound desc = could not find container \"b106ddbcf514b40c86da6e3aeea23c8692ca10d94c659d78b9ef3c14f51910e6\": container with ID starting with b106ddbcf514b40c86da6e3aeea23c8692ca10d94c659d78b9ef3c14f51910e6 not found: ID does not exist" Mar 08 00:26:21 crc kubenswrapper[4762]: I0308 00:26:21.920548 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-749d64fd6d-qsb8p"] Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.440366 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s"] Mar 08 00:26:22 crc kubenswrapper[4762]: E0308 00:26:22.440702 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82abd8f0-adc8-4094-a833-073e1cc68f50" containerName="extract-content" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 
00:26:22.440720 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="82abd8f0-adc8-4094-a833-073e1cc68f50" containerName="extract-content" Mar 08 00:26:22 crc kubenswrapper[4762]: E0308 00:26:22.440741 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82abd8f0-adc8-4094-a833-073e1cc68f50" containerName="registry-server" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.440749 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="82abd8f0-adc8-4094-a833-073e1cc68f50" containerName="registry-server" Mar 08 00:26:22 crc kubenswrapper[4762]: E0308 00:26:22.440780 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62001691-4a9a-45c4-8f19-e9a3dbf1812d" containerName="controller-manager" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.440790 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="62001691-4a9a-45c4-8f19-e9a3dbf1812d" containerName="controller-manager" Mar 08 00:26:22 crc kubenswrapper[4762]: E0308 00:26:22.440805 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82abd8f0-adc8-4094-a833-073e1cc68f50" containerName="extract-utilities" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.440814 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="82abd8f0-adc8-4094-a833-073e1cc68f50" containerName="extract-utilities" Mar 08 00:26:22 crc kubenswrapper[4762]: E0308 00:26:22.440828 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93accc2a-5975-4e5f-8927-264224130aca" containerName="oc" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.440835 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="93accc2a-5975-4e5f-8927-264224130aca" containerName="oc" Mar 08 00:26:22 crc kubenswrapper[4762]: E0308 00:26:22.440847 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d" containerName="route-controller-manager" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.440857 4762 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d" containerName="route-controller-manager" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.440978 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="82abd8f0-adc8-4094-a833-073e1cc68f50" containerName="registry-server" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.440991 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="62001691-4a9a-45c4-8f19-e9a3dbf1812d" containerName="controller-manager" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.441011 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d" containerName="route-controller-manager" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.441022 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="93accc2a-5975-4e5f-8927-264224130aca" containerName="oc" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.441417 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68c69b75b4-sfmfl"] Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.441992 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.442117 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.444491 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.445668 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.445804 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.448661 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.448706 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.450364 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.450547 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.450713 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.452132 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.452195 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 08 
00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.452295 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.452794 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.456890 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68c69b75b4-sfmfl"] Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.459827 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.461171 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s"] Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.502447 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/334f5d4e-935b-42ba-b77f-2e501853fef8-config\") pod \"controller-manager-68c69b75b4-sfmfl\" (UID: \"334f5d4e-935b-42ba-b77f-2e501853fef8\") " pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.502666 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e3e6f1d-92e9-411e-a724-03fea1fc802b-serving-cert\") pod \"route-controller-manager-777f6d5845-rfx2s\" (UID: \"4e3e6f1d-92e9-411e-a724-03fea1fc802b\") " pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.502831 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7mwtf\" (UniqueName: \"kubernetes.io/projected/334f5d4e-935b-42ba-b77f-2e501853fef8-kube-api-access-7mwtf\") pod \"controller-manager-68c69b75b4-sfmfl\" (UID: \"334f5d4e-935b-42ba-b77f-2e501853fef8\") " pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.502953 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/334f5d4e-935b-42ba-b77f-2e501853fef8-proxy-ca-bundles\") pod \"controller-manager-68c69b75b4-sfmfl\" (UID: \"334f5d4e-935b-42ba-b77f-2e501853fef8\") " pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.503063 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/334f5d4e-935b-42ba-b77f-2e501853fef8-client-ca\") pod \"controller-manager-68c69b75b4-sfmfl\" (UID: \"334f5d4e-935b-42ba-b77f-2e501853fef8\") " pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.503193 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e3e6f1d-92e9-411e-a724-03fea1fc802b-config\") pod \"route-controller-manager-777f6d5845-rfx2s\" (UID: \"4e3e6f1d-92e9-411e-a724-03fea1fc802b\") " pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.503298 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e3e6f1d-92e9-411e-a724-03fea1fc802b-client-ca\") pod \"route-controller-manager-777f6d5845-rfx2s\" (UID: \"4e3e6f1d-92e9-411e-a724-03fea1fc802b\") " 
pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.503427 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/334f5d4e-935b-42ba-b77f-2e501853fef8-serving-cert\") pod \"controller-manager-68c69b75b4-sfmfl\" (UID: \"334f5d4e-935b-42ba-b77f-2e501853fef8\") " pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.503550 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvn2h\" (UniqueName: \"kubernetes.io/projected/4e3e6f1d-92e9-411e-a724-03fea1fc802b-kube-api-access-tvn2h\") pod \"route-controller-manager-777f6d5845-rfx2s\" (UID: \"4e3e6f1d-92e9-411e-a724-03fea1fc802b\") " pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.605284 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/334f5d4e-935b-42ba-b77f-2e501853fef8-proxy-ca-bundles\") pod \"controller-manager-68c69b75b4-sfmfl\" (UID: \"334f5d4e-935b-42ba-b77f-2e501853fef8\") " pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.605364 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/334f5d4e-935b-42ba-b77f-2e501853fef8-client-ca\") pod \"controller-manager-68c69b75b4-sfmfl\" (UID: \"334f5d4e-935b-42ba-b77f-2e501853fef8\") " pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.605459 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4e3e6f1d-92e9-411e-a724-03fea1fc802b-config\") pod \"route-controller-manager-777f6d5845-rfx2s\" (UID: \"4e3e6f1d-92e9-411e-a724-03fea1fc802b\") " pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.605498 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e3e6f1d-92e9-411e-a724-03fea1fc802b-client-ca\") pod \"route-controller-manager-777f6d5845-rfx2s\" (UID: \"4e3e6f1d-92e9-411e-a724-03fea1fc802b\") " pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.605549 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/334f5d4e-935b-42ba-b77f-2e501853fef8-serving-cert\") pod \"controller-manager-68c69b75b4-sfmfl\" (UID: \"334f5d4e-935b-42ba-b77f-2e501853fef8\") " pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.609130 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/334f5d4e-935b-42ba-b77f-2e501853fef8-client-ca\") pod \"controller-manager-68c69b75b4-sfmfl\" (UID: \"334f5d4e-935b-42ba-b77f-2e501853fef8\") " pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.609868 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e3e6f1d-92e9-411e-a724-03fea1fc802b-client-ca\") pod \"route-controller-manager-777f6d5845-rfx2s\" (UID: \"4e3e6f1d-92e9-411e-a724-03fea1fc802b\") " pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.610025 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvn2h\" (UniqueName: \"kubernetes.io/projected/4e3e6f1d-92e9-411e-a724-03fea1fc802b-kube-api-access-tvn2h\") pod \"route-controller-manager-777f6d5845-rfx2s\" (UID: \"4e3e6f1d-92e9-411e-a724-03fea1fc802b\") " pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.610250 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/334f5d4e-935b-42ba-b77f-2e501853fef8-proxy-ca-bundles\") pod \"controller-manager-68c69b75b4-sfmfl\" (UID: \"334f5d4e-935b-42ba-b77f-2e501853fef8\") " pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.610244 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e3e6f1d-92e9-411e-a724-03fea1fc802b-config\") pod \"route-controller-manager-777f6d5845-rfx2s\" (UID: \"4e3e6f1d-92e9-411e-a724-03fea1fc802b\") " pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.610418 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/334f5d4e-935b-42ba-b77f-2e501853fef8-config\") pod \"controller-manager-68c69b75b4-sfmfl\" (UID: \"334f5d4e-935b-42ba-b77f-2e501853fef8\") " pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.610619 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e3e6f1d-92e9-411e-a724-03fea1fc802b-serving-cert\") pod \"route-controller-manager-777f6d5845-rfx2s\" (UID: \"4e3e6f1d-92e9-411e-a724-03fea1fc802b\") " 
pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.610745 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mwtf\" (UniqueName: \"kubernetes.io/projected/334f5d4e-935b-42ba-b77f-2e501853fef8-kube-api-access-7mwtf\") pod \"controller-manager-68c69b75b4-sfmfl\" (UID: \"334f5d4e-935b-42ba-b77f-2e501853fef8\") " pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.612167 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/334f5d4e-935b-42ba-b77f-2e501853fef8-config\") pod \"controller-manager-68c69b75b4-sfmfl\" (UID: \"334f5d4e-935b-42ba-b77f-2e501853fef8\") " pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.617215 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/334f5d4e-935b-42ba-b77f-2e501853fef8-serving-cert\") pod \"controller-manager-68c69b75b4-sfmfl\" (UID: \"334f5d4e-935b-42ba-b77f-2e501853fef8\") " pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.628171 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e3e6f1d-92e9-411e-a724-03fea1fc802b-serving-cert\") pod \"route-controller-manager-777f6d5845-rfx2s\" (UID: \"4e3e6f1d-92e9-411e-a724-03fea1fc802b\") " pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.641176 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mwtf\" (UniqueName: 
\"kubernetes.io/projected/334f5d4e-935b-42ba-b77f-2e501853fef8-kube-api-access-7mwtf\") pod \"controller-manager-68c69b75b4-sfmfl\" (UID: \"334f5d4e-935b-42ba-b77f-2e501853fef8\") " pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.649692 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvn2h\" (UniqueName: \"kubernetes.io/projected/4e3e6f1d-92e9-411e-a724-03fea1fc802b-kube-api-access-tvn2h\") pod \"route-controller-manager-777f6d5845-rfx2s\" (UID: \"4e3e6f1d-92e9-411e-a724-03fea1fc802b\") " pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.781233 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:22 crc kubenswrapper[4762]: I0308 00:26:22.800875 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 00:26:23 crc kubenswrapper[4762]: I0308 00:26:23.036608 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68c69b75b4-sfmfl"] Mar 08 00:26:23 crc kubenswrapper[4762]: I0308 00:26:23.278300 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62001691-4a9a-45c4-8f19-e9a3dbf1812d" path="/var/lib/kubelet/pods/62001691-4a9a-45c4-8f19-e9a3dbf1812d/volumes" Mar 08 00:26:23 crc kubenswrapper[4762]: I0308 00:26:23.279072 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d" path="/var/lib/kubelet/pods/7f2d4d7c-c0a7-43e1-95e2-d94cf0991c6d/volumes" Mar 08 00:26:23 crc kubenswrapper[4762]: I0308 00:26:23.352826 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s"] Mar 08 00:26:23 crc kubenswrapper[4762]: W0308 00:26:23.363117 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e3e6f1d_92e9_411e_a724_03fea1fc802b.slice/crio-d322ba971b49720300387b43ec443c0b11b51226faac8432b54cdc82426e71cb WatchSource:0}: Error finding container d322ba971b49720300387b43ec443c0b11b51226faac8432b54cdc82426e71cb: Status 404 returned error can't find the container with id d322ba971b49720300387b43ec443c0b11b51226faac8432b54cdc82426e71cb Mar 08 00:26:23 crc kubenswrapper[4762]: I0308 00:26:23.885523 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" event={"ID":"4e3e6f1d-92e9-411e-a724-03fea1fc802b","Type":"ContainerStarted","Data":"b22e9652e34ae45c89cdc44e52741809be38ad85dfcc3388c69148816796b18c"} Mar 08 00:26:23 crc kubenswrapper[4762]: I0308 00:26:23.886085 4762 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" event={"ID":"4e3e6f1d-92e9-411e-a724-03fea1fc802b","Type":"ContainerStarted","Data":"d322ba971b49720300387b43ec443c0b11b51226faac8432b54cdc82426e71cb"} Mar 08 00:26:23 crc kubenswrapper[4762]: I0308 00:26:23.886960 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 00:26:23 crc kubenswrapper[4762]: I0308 00:26:23.890732 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" event={"ID":"334f5d4e-935b-42ba-b77f-2e501853fef8","Type":"ContainerStarted","Data":"172e605acd62cb4c17cde7477c69eef93933f05805e426c78f8ff6fa71304cfa"} Mar 08 00:26:23 crc kubenswrapper[4762]: I0308 00:26:23.890785 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" event={"ID":"334f5d4e-935b-42ba-b77f-2e501853fef8","Type":"ContainerStarted","Data":"f878459f34104c93e875354557f7e5d57f6b2a3707ce2074c5c6b2e5857a1748"} Mar 08 00:26:23 crc kubenswrapper[4762]: I0308 00:26:23.891029 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:23 crc kubenswrapper[4762]: I0308 00:26:23.898513 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" Mar 08 00:26:23 crc kubenswrapper[4762]: I0308 00:26:23.905902 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" podStartSLOduration=2.905882635 podStartE2EDuration="2.905882635s" podCreationTimestamp="2026-03-08 00:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-03-08 00:26:23.901437578 +0000 UTC m=+205.375581972" watchObservedRunningTime="2026-03-08 00:26:23.905882635 +0000 UTC m=+205.380026979" Mar 08 00:26:23 crc kubenswrapper[4762]: I0308 00:26:23.928297 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" podStartSLOduration=3.928279987 podStartE2EDuration="3.928279987s" podCreationTimestamp="2026-03-08 00:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:26:23.925162084 +0000 UTC m=+205.399306438" watchObservedRunningTime="2026-03-08 00:26:23.928279987 +0000 UTC m=+205.402424331" Mar 08 00:26:24 crc kubenswrapper[4762]: I0308 00:26:24.164558 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.301813 4762 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.302917 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.304920 4762 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.305419 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32" gracePeriod=15 Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.305587 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc" gracePeriod=15 Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.305636 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e" gracePeriod=15 Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.305577 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8" gracePeriod=15 Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.307557 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" containerID="cri-o://1c8d0214c1e6f782558d85144d804a68b511c473b04f10531860109eb2dd3d08" gracePeriod=15 Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.309697 4762 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 08 00:26:27 crc kubenswrapper[4762]: E0308 00:26:27.310232 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.310395 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Mar 08 00:26:27 crc kubenswrapper[4762]: E0308 00:26:27.310583 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.310719 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 08 00:26:27 crc kubenswrapper[4762]: E0308 00:26:27.310921 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.311044 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 08 00:26:27 crc kubenswrapper[4762]: E0308 00:26:27.311172 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.311289 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 08 00:26:27 crc kubenswrapper[4762]: E0308 
00:26:27.311428 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.311554 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Mar 08 00:26:27 crc kubenswrapper[4762]: E0308 00:26:27.311679 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.311835 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 08 00:26:27 crc kubenswrapper[4762]: E0308 00:26:27.312007 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.312129 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 08 00:26:27 crc kubenswrapper[4762]: E0308 00:26:27.312358 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.312492 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.312842 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.313750 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.313906 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.314035 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.314190 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.314331 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.314450 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Mar 08 00:26:27 crc kubenswrapper[4762]: E0308 00:26:27.314806 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.314943 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 08 00:26:27 crc kubenswrapper[4762]: E0308 00:26:27.315070 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.315205 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 
00:26:27.315512 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.315673 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.365166 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.388708 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.388817 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.388869 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.388945 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.389032 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.389076 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.389107 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.389248 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: E0308 00:26:27.460913 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:26:27Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:26:27Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:26:27Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-08T00:26:27Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:27 crc kubenswrapper[4762]: E0308 00:26:27.461892 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:27 crc kubenswrapper[4762]: E0308 00:26:27.462369 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:27 crc kubenswrapper[4762]: E0308 00:26:27.462730 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 
00:26:27 crc kubenswrapper[4762]: E0308 00:26:27.463073 4762 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:27 crc kubenswrapper[4762]: E0308 00:26:27.463109 4762 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.491321 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.491393 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.491418 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.491431 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 
00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.491490 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.491508 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.491543 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.491546 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.491569 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc 
kubenswrapper[4762]: I0308 00:26:27.491562 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.491618 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.491617 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.491652 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.491662 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.491654 4762 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.491731 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.653313 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 08 00:26:27 crc kubenswrapper[4762]: W0308 00:26:27.696021 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-63b8375ecff374048ea7a474028c1be8def5fc17687779a0d5cd1c0a77fedb3d WatchSource:0}: Error finding container 63b8375ecff374048ea7a474028c1be8def5fc17687779a0d5cd1c0a77fedb3d: Status 404 returned error can't find the container with id 63b8375ecff374048ea7a474028c1be8def5fc17687779a0d5cd1c0a77fedb3d Mar 08 00:26:27 crc kubenswrapper[4762]: E0308 00:26:27.699556 4762 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189ab6110e924017 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:26:27.698876439 +0000 UTC m=+209.173020823,LastTimestamp:2026-03-08 00:26:27.698876439 +0000 UTC m=+209.173020823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.919396 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"63b8375ecff374048ea7a474028c1be8def5fc17687779a0d5cd1c0a77fedb3d"} Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.923041 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.924835 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.926077 4762 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1c8d0214c1e6f782558d85144d804a68b511c473b04f10531860109eb2dd3d08" exitCode=0 Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.926118 4762 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8" exitCode=0 Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.926136 4762 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc" exitCode=0 Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.926152 4762 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e" exitCode=2 Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.926242 4762 scope.go:117] "RemoveContainer" containerID="74b9b7421d249b05969f5659f8fcc6704fa3d446068ea33ac092a8d87e958784" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.929324 4762 generic.go:334] "Generic (PLEG): container finished" podID="f89876a6-46ce-4acd-8078-c41d23a2330e" containerID="76cc49238a18de54dce8f4d8a706d403438e2a9b157bad1db5d3dd155d508cdb" exitCode=0 Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.929384 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f89876a6-46ce-4acd-8078-c41d23a2330e","Type":"ContainerDied","Data":"76cc49238a18de54dce8f4d8a706d403438e2a9b157bad1db5d3dd155d508cdb"} Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.930356 4762 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.930968 4762 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:27 crc kubenswrapper[4762]: I0308 00:26:27.931307 4762 status_manager.go:851] "Failed to get status for pod" podUID="f89876a6-46ce-4acd-8078-c41d23a2330e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:28 crc kubenswrapper[4762]: E0308 00:26:28.384236 4762 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:28 crc kubenswrapper[4762]: E0308 00:26:28.385175 4762 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:28 crc kubenswrapper[4762]: E0308 00:26:28.385686 4762 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:28 crc kubenswrapper[4762]: E0308 00:26:28.386482 4762 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:28 crc kubenswrapper[4762]: E0308 00:26:28.387267 4762 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: 
connect: connection refused" Mar 08 00:26:28 crc kubenswrapper[4762]: I0308 00:26:28.387896 4762 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 08 00:26:28 crc kubenswrapper[4762]: E0308 00:26:28.389127 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="200ms" Mar 08 00:26:28 crc kubenswrapper[4762]: E0308 00:26:28.589843 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="400ms" Mar 08 00:26:28 crc kubenswrapper[4762]: I0308 00:26:28.944526 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 08 00:26:28 crc kubenswrapper[4762]: I0308 00:26:28.947898 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"990bc585ed8afc9b74fbfd9d199fc06804350d2335ff55d1a5dfbaad265538ca"} Mar 08 00:26:28 crc kubenswrapper[4762]: I0308 00:26:28.949485 4762 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:28 crc kubenswrapper[4762]: I0308 00:26:28.950505 4762 status_manager.go:851] "Failed to get status for pod" 
podUID="f89876a6-46ce-4acd-8078-c41d23a2330e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:28 crc kubenswrapper[4762]: I0308 00:26:28.951107 4762 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:28 crc kubenswrapper[4762]: E0308 00:26:28.991551 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="800ms" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.266049 4762 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.266715 4762 status_manager.go:851] "Failed to get status for pod" podUID="f89876a6-46ce-4acd-8078-c41d23a2330e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.267184 4762 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.345567 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.346387 4762 status_manager.go:851] "Failed to get status for pod" podUID="f89876a6-46ce-4acd-8078-c41d23a2330e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.346864 4762 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.423904 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f89876a6-46ce-4acd-8078-c41d23a2330e-kubelet-dir\") pod \"f89876a6-46ce-4acd-8078-c41d23a2330e\" (UID: \"f89876a6-46ce-4acd-8078-c41d23a2330e\") " Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.424508 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f89876a6-46ce-4acd-8078-c41d23a2330e-var-lock\") pod \"f89876a6-46ce-4acd-8078-c41d23a2330e\" (UID: \"f89876a6-46ce-4acd-8078-c41d23a2330e\") " Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 
00:26:29.424559 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f89876a6-46ce-4acd-8078-c41d23a2330e-kube-api-access\") pod \"f89876a6-46ce-4acd-8078-c41d23a2330e\" (UID: \"f89876a6-46ce-4acd-8078-c41d23a2330e\") " Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.424226 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f89876a6-46ce-4acd-8078-c41d23a2330e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f89876a6-46ce-4acd-8078-c41d23a2330e" (UID: "f89876a6-46ce-4acd-8078-c41d23a2330e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.425062 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f89876a6-46ce-4acd-8078-c41d23a2330e-var-lock" (OuterVolumeSpecName: "var-lock") pod "f89876a6-46ce-4acd-8078-c41d23a2330e" (UID: "f89876a6-46ce-4acd-8078-c41d23a2330e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.454100 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f89876a6-46ce-4acd-8078-c41d23a2330e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f89876a6-46ce-4acd-8078-c41d23a2330e" (UID: "f89876a6-46ce-4acd-8078-c41d23a2330e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.525903 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f89876a6-46ce-4acd-8078-c41d23a2330e-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.525940 4762 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f89876a6-46ce-4acd-8078-c41d23a2330e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.525949 4762 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f89876a6-46ce-4acd-8078-c41d23a2330e-var-lock\") on node \"crc\" DevicePath \"\"" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.702846 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.704527 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.705402 4762 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.705791 4762 status_manager.go:851] "Failed to get status for pod" podUID="f89876a6-46ce-4acd-8078-c41d23a2330e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.706133 4762 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.728158 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.728268 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.728303 
4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.728374 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.728461 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.728505 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.728908 4762 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.728975 4762 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.728993 4762 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Mar 08 00:26:29 crc kubenswrapper[4762]: E0308 00:26:29.793054 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="1.6s"
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.956585 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f89876a6-46ce-4acd-8078-c41d23a2330e","Type":"ContainerDied","Data":"8f5ee1ce0623b9ba46af3be82606909ccf42d4d06babe3693092a511dec4f574"}
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.956646 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f5ee1ce0623b9ba46af3be82606909ccf42d4d06babe3693092a511dec4f574"
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.956664 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.961416 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.962876 4762 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32" exitCode=0
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.962974 4762 scope.go:117] "RemoveContainer" containerID="1c8d0214c1e6f782558d85144d804a68b511c473b04f10531860109eb2dd3d08"
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.963069 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.981159 4762 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.982269 4762 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.982710 4762 status_manager.go:851] "Failed to get status for pod" podUID="f89876a6-46ce-4acd-8078-c41d23a2330e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.996165 4762 status_manager.go:851] "Failed to get status for pod" podUID="f89876a6-46ce-4acd-8078-c41d23a2330e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.996539 4762 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.996933 4762 scope.go:117] "RemoveContainer" containerID="effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8"
Mar 08 00:26:29 crc kubenswrapper[4762]: I0308 00:26:29.997174 4762 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:30 crc kubenswrapper[4762]: I0308 00:26:30.019466 4762 scope.go:117] "RemoveContainer" containerID="48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc"
Mar 08 00:26:30 crc kubenswrapper[4762]: I0308 00:26:30.042943 4762 scope.go:117] "RemoveContainer" containerID="d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e"
Mar 08 00:26:30 crc kubenswrapper[4762]: I0308 00:26:30.065855 4762 scope.go:117] "RemoveContainer" containerID="3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32"
Mar 08 00:26:30 crc kubenswrapper[4762]: I0308 00:26:30.090887 4762 scope.go:117] "RemoveContainer" containerID="a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b"
Mar 08 00:26:30 crc kubenswrapper[4762]: I0308 00:26:30.119606 4762 scope.go:117] "RemoveContainer" containerID="1c8d0214c1e6f782558d85144d804a68b511c473b04f10531860109eb2dd3d08"
Mar 08 00:26:30 crc kubenswrapper[4762]: E0308 00:26:30.120982 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c8d0214c1e6f782558d85144d804a68b511c473b04f10531860109eb2dd3d08\": container with ID starting with 1c8d0214c1e6f782558d85144d804a68b511c473b04f10531860109eb2dd3d08 not found: ID does not exist" containerID="1c8d0214c1e6f782558d85144d804a68b511c473b04f10531860109eb2dd3d08"
Mar 08 00:26:30 crc kubenswrapper[4762]: I0308 00:26:30.121039 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c8d0214c1e6f782558d85144d804a68b511c473b04f10531860109eb2dd3d08"} err="failed to get container status \"1c8d0214c1e6f782558d85144d804a68b511c473b04f10531860109eb2dd3d08\": rpc error: code = NotFound desc = could not find container \"1c8d0214c1e6f782558d85144d804a68b511c473b04f10531860109eb2dd3d08\": container with ID starting with 1c8d0214c1e6f782558d85144d804a68b511c473b04f10531860109eb2dd3d08 not found: ID does not exist"
Mar 08 00:26:30 crc kubenswrapper[4762]: I0308 00:26:30.121080 4762 scope.go:117] "RemoveContainer" containerID="effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8"
Mar 08 00:26:30 crc kubenswrapper[4762]: E0308 00:26:30.121673 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8\": container with ID starting with effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8 not found: ID does not exist" containerID="effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8"
Mar 08 00:26:30 crc kubenswrapper[4762]: I0308 00:26:30.121750 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8"} err="failed to get container status \"effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8\": rpc error: code = NotFound desc = could not find container \"effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8\": container with ID starting with effe0ce868afee75aaa7015247172f04b787df5efd16f7c40cf5da2ea4ee54d8 not found: ID does not exist"
Mar 08 00:26:30 crc kubenswrapper[4762]: I0308 00:26:30.121826 4762 scope.go:117] "RemoveContainer" containerID="48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc"
Mar 08 00:26:30 crc kubenswrapper[4762]: E0308 00:26:30.122183 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc\": container with ID starting with 48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc not found: ID does not exist" containerID="48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc"
Mar 08 00:26:30 crc kubenswrapper[4762]: I0308 00:26:30.122213 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc"} err="failed to get container status \"48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc\": rpc error: code = NotFound desc = could not find container \"48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc\": container with ID starting with 48b33b9847da473a5e376aa664ce6e50633ea2cefdbae9459d24e2df05a5f2dc not found: ID does not exist"
Mar 08 00:26:30 crc kubenswrapper[4762]: I0308 00:26:30.122233 4762 scope.go:117] "RemoveContainer" containerID="d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e"
Mar 08 00:26:30 crc kubenswrapper[4762]: E0308 00:26:30.122571 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e\": container with ID starting with d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e not found: ID does not exist" containerID="d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e"
Mar 08 00:26:30 crc kubenswrapper[4762]: I0308 00:26:30.122625 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e"} err="failed to get container status \"d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e\": rpc error: code = NotFound desc = could not find container \"d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e\": container with ID starting with d48877ad5ec0d65f3f03389f88cad2e9b3a922d40991c104e321e16ca74cb32e not found: ID does not exist"
Mar 08 00:26:30 crc kubenswrapper[4762]: I0308 00:26:30.122673 4762 scope.go:117] "RemoveContainer" containerID="3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32"
Mar 08 00:26:30 crc kubenswrapper[4762]: E0308 00:26:30.123108 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32\": container with ID starting with 3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32 not found: ID does not exist" containerID="3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32"
Mar 08 00:26:30 crc kubenswrapper[4762]: I0308 00:26:30.123148 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32"} err="failed to get container status \"3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32\": rpc error: code = NotFound desc = could not find container \"3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32\": container with ID starting with 3d7d3d588223ceb95b7631b550b764924bb95729d991b47e7236ad9d78307d32 not found: ID does not exist"
Mar 08 00:26:30 crc kubenswrapper[4762]: I0308 00:26:30.123168 4762 scope.go:117] "RemoveContainer" containerID="a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b"
Mar 08 00:26:30 crc kubenswrapper[4762]: E0308 00:26:30.123575 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\": container with ID starting with a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b not found: ID does not exist" containerID="a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b"
Mar 08 00:26:30 crc kubenswrapper[4762]: I0308 00:26:30.123604 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b"} err="failed to get container status \"a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\": rpc error: code = NotFound desc = could not find container \"a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b\": container with ID starting with a75ab6dcf07f3ccde5bbce0f4f0afd651b651a2172be84c7fc4d75333a1c584b not found: ID does not exist"
Mar 08 00:26:31 crc kubenswrapper[4762]: I0308 00:26:31.274594 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Mar 08 00:26:31 crc kubenswrapper[4762]: E0308 00:26:31.352839 4762 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-n786p" volumeName="registry-storage"
Mar 08 00:26:31 crc kubenswrapper[4762]: E0308 00:26:31.393587 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="3.2s"
Mar 08 00:26:31 crc kubenswrapper[4762]: E0308 00:26:31.533011 4762 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189ab6110e924017 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:26:27.698876439 +0000 UTC m=+209.173020823,LastTimestamp:2026-03-08 00:26:27.698876439 +0000 UTC m=+209.173020823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 08 00:26:34 crc kubenswrapper[4762]: E0308 00:26:34.595489 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="6.4s"
Mar 08 00:26:39 crc kubenswrapper[4762]: I0308 00:26:39.268063 4762 status_manager.go:851] "Failed to get status for pod" podUID="f89876a6-46ce-4acd-8078-c41d23a2330e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:39 crc kubenswrapper[4762]: I0308 00:26:39.269498 4762 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:40 crc kubenswrapper[4762]: I0308 00:26:40.045635 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 08 00:26:40 crc kubenswrapper[4762]: I0308 00:26:40.046671 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Mar 08 00:26:40 crc kubenswrapper[4762]: I0308 00:26:40.046818 4762 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d7dc43b94483b191269958f97bb05775c3ca2f45710b19afc37d73ea96f44ab0" exitCode=1
Mar 08 00:26:40 crc kubenswrapper[4762]: I0308 00:26:40.046861 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d7dc43b94483b191269958f97bb05775c3ca2f45710b19afc37d73ea96f44ab0"}
Mar 08 00:26:40 crc kubenswrapper[4762]: I0308 00:26:40.047544 4762 scope.go:117] "RemoveContainer" containerID="d7dc43b94483b191269958f97bb05775c3ca2f45710b19afc37d73ea96f44ab0"
Mar 08 00:26:40 crc kubenswrapper[4762]: I0308 00:26:40.048058 4762 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:40 crc kubenswrapper[4762]: I0308 00:26:40.048707 4762 status_manager.go:851] "Failed to get status for pod" podUID="f89876a6-46ce-4acd-8078-c41d23a2330e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:40 crc kubenswrapper[4762]: I0308 00:26:40.049422 4762 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:40 crc kubenswrapper[4762]: E0308 00:26:40.998519 4762 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.196:6443: connect: connection refused" interval="7s"
Mar 08 00:26:41 crc kubenswrapper[4762]: I0308 00:26:41.060398 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 08 00:26:41 crc kubenswrapper[4762]: I0308 00:26:41.061170 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Mar 08 00:26:41 crc kubenswrapper[4762]: I0308 00:26:41.061242 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ebb1494dfbc794f80c24a6246263105b532b4d319089fce3f30482da0af2f4c0"}
Mar 08 00:26:41 crc kubenswrapper[4762]: I0308 00:26:41.062902 4762 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:41 crc kubenswrapper[4762]: I0308 00:26:41.063593 4762 status_manager.go:851] "Failed to get status for pod" podUID="f89876a6-46ce-4acd-8078-c41d23a2330e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:41 crc kubenswrapper[4762]: I0308 00:26:41.064722 4762 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:41 crc kubenswrapper[4762]: I0308 00:26:41.263360 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 08 00:26:41 crc kubenswrapper[4762]: I0308 00:26:41.264436 4762 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:41 crc kubenswrapper[4762]: I0308 00:26:41.265035 4762 status_manager.go:851] "Failed to get status for pod" podUID="f89876a6-46ce-4acd-8078-c41d23a2330e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:41 crc kubenswrapper[4762]: I0308 00:26:41.265729 4762 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:41 crc kubenswrapper[4762]: I0308 00:26:41.287711 4762 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f11fe626-2001-4b03-8751-2498c02e9969"
Mar 08 00:26:41 crc kubenswrapper[4762]: I0308 00:26:41.287822 4762 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f11fe626-2001-4b03-8751-2498c02e9969"
Mar 08 00:26:41 crc kubenswrapper[4762]: E0308 00:26:41.288719 4762 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 08 00:26:41 crc kubenswrapper[4762]: I0308 00:26:41.289439 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 08 00:26:41 crc kubenswrapper[4762]: W0308 00:26:41.320829 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-15ac2f78be11d3b778c99ed4cab18a7fd4e49674e2f3892f2ad711ad7e8d24a5 WatchSource:0}: Error finding container 15ac2f78be11d3b778c99ed4cab18a7fd4e49674e2f3892f2ad711ad7e8d24a5: Status 404 returned error can't find the container with id 15ac2f78be11d3b778c99ed4cab18a7fd4e49674e2f3892f2ad711ad7e8d24a5
Mar 08 00:26:41 crc kubenswrapper[4762]: E0308 00:26:41.534501 4762 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.196:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189ab6110e924017 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-08 00:26:27.698876439 +0000 UTC m=+209.173020823,LastTimestamp:2026-03-08 00:26:27.698876439 +0000 UTC m=+209.173020823,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 08 00:26:42 crc kubenswrapper[4762]: I0308 00:26:42.073417 4762 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="135eab34cab2bf1e2b6fc295aee647b200e8f3b1a6fe5d892b8475f7f95d044f" exitCode=0
Mar 08 00:26:42 crc kubenswrapper[4762]: I0308 00:26:42.073552 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"135eab34cab2bf1e2b6fc295aee647b200e8f3b1a6fe5d892b8475f7f95d044f"}
Mar 08 00:26:42 crc kubenswrapper[4762]: I0308 00:26:42.073861 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"15ac2f78be11d3b778c99ed4cab18a7fd4e49674e2f3892f2ad711ad7e8d24a5"}
Mar 08 00:26:42 crc kubenswrapper[4762]: I0308 00:26:42.074299 4762 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f11fe626-2001-4b03-8751-2498c02e9969"
Mar 08 00:26:42 crc kubenswrapper[4762]: I0308 00:26:42.074324 4762 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f11fe626-2001-4b03-8751-2498c02e9969"
Mar 08 00:26:42 crc kubenswrapper[4762]: I0308 00:26:42.075224 4762 status_manager.go:851] "Failed to get status for pod" podUID="f89876a6-46ce-4acd-8078-c41d23a2330e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:42 crc kubenswrapper[4762]: E0308 00:26:42.075462 4762 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.196:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 08 00:26:42 crc kubenswrapper[4762]: I0308 00:26:42.075693 4762 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:42 crc kubenswrapper[4762]: I0308 00:26:42.076288 4762 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.196:6443: connect: connection refused"
Mar 08 00:26:43 crc kubenswrapper[4762]: I0308 00:26:43.086203 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f95f90658385d4c358e9eac75010bb9b9d4d98233c1db7e98dc2c5f8d61343bb"}
Mar 08 00:26:43 crc kubenswrapper[4762]: I0308 00:26:43.086256 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8ec4165434c92498e3378ee3990f4f22665dc9a9a291684a34812342cba80751"}
Mar 08 00:26:44 crc kubenswrapper[4762]: I0308 00:26:44.095902 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cbfae8fde3b68700fa186d3dc206505a55211242b7366e1aa5cd6f8ff81e20b7"}
Mar 08 00:26:44 crc kubenswrapper[4762]: I0308 00:26:44.096520 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8d1175794dab3f6666c9d4cd25d18df66aa3b7e11048d753bac9de03ae3d3be4"}
Mar 08 00:26:44 crc kubenswrapper[4762]: I0308 00:26:44.096550 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a2880bfa14a131c42f86efca2288ef0d07bc7b2e92df09ccfaa992c34fad8a4d"}
Mar 08 00:26:44 crc kubenswrapper[4762]: I0308 00:26:44.096584 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 08 00:26:44 crc kubenswrapper[4762]: I0308 00:26:44.096292 4762 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f11fe626-2001-4b03-8751-2498c02e9969"
Mar 08 00:26:44 crc kubenswrapper[4762]: I0308 00:26:44.096627 4762 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f11fe626-2001-4b03-8751-2498c02e9969"
Mar 08 00:26:45 crc kubenswrapper[4762]: I0308 00:26:45.289538 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 08 00:26:46 crc kubenswrapper[4762]: I0308 00:26:46.290325 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 08 00:26:46 crc kubenswrapper[4762]: I0308 00:26:46.290416 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 08 00:26:46 crc kubenswrapper[4762]: I0308 00:26:46.298836 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 08 00:26:46 crc kubenswrapper[4762]: I0308 00:26:46.598796 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 08 00:26:46 crc kubenswrapper[4762]: I0308 00:26:46.599105 4762 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Mar 08 00:26:46 crc kubenswrapper[4762]: I0308 00:26:46.599185 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Mar 08 00:26:49 crc kubenswrapper[4762]: I0308 00:26:49.175708 4762 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 08 00:26:49 crc kubenswrapper[4762]: I0308 00:26:49.315197 4762 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="911d1866-fffe-46ec-9d08-52e10bc29888"
Mar 08 00:26:50 crc kubenswrapper[4762]: I0308 00:26:50.141679 4762 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f11fe626-2001-4b03-8751-2498c02e9969"
Mar 08 00:26:50 crc kubenswrapper[4762]: I0308 00:26:50.141730 4762 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f11fe626-2001-4b03-8751-2498c02e9969"
Mar 08 00:26:50 crc kubenswrapper[4762]: I0308 00:26:50.145365 4762 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="911d1866-fffe-46ec-9d08-52e10bc29888"
Mar 08 00:26:50 crc kubenswrapper[4762]: I0308 00:26:50.148479 4762 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://8ec4165434c92498e3378ee3990f4f22665dc9a9a291684a34812342cba80751"
Mar 08 00:26:50 crc kubenswrapper[4762]: I0308 00:26:50.148516 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 08 00:26:51 crc kubenswrapper[4762]: I0308 00:26:51.148273 4762 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f11fe626-2001-4b03-8751-2498c02e9969"
Mar 08 00:26:51 crc kubenswrapper[4762]: I0308 00:26:51.148655 4762 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f11fe626-2001-4b03-8751-2498c02e9969"
Mar 08 00:26:51 crc kubenswrapper[4762]: I0308 00:26:51.153121 4762 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="911d1866-fffe-46ec-9d08-52e10bc29888"
Mar 08 00:26:56 crc kubenswrapper[4762]: I0308 00:26:56.599072 4762 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Mar 08 00:26:56 crc kubenswrapper[4762]: I0308 00:26:56.599824 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Mar 08 00:26:58 crc kubenswrapper[4762]: I0308 00:26:58.521834 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Mar 08 00:26:58 crc kubenswrapper[4762]: I0308 00:26:58.731744 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 08 00:26:59 crc kubenswrapper[4762]: I0308 00:26:59.116546 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 08 00:26:59 crc kubenswrapper[4762]: I0308 00:26:59.305820 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 08 00:26:59 crc kubenswrapper[4762]: I0308 00:26:59.613029 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 08 00:26:59 crc kubenswrapper[4762]: I0308 00:26:59.845117 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 08 00:26:59 crc kubenswrapper[4762]: I0308 00:26:59.894009 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 08 00:26:59 crc kubenswrapper[4762]: I0308 00:26:59.918317 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 08 00:27:00 crc kubenswrapper[4762]: I0308 00:27:00.222895 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 08 00:27:00 crc kubenswrapper[4762]: I0308 00:27:00.304866 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 08 00:27:00 crc kubenswrapper[4762]: I0308 00:27:00.376073 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 08 00:27:00 crc kubenswrapper[4762]: I0308 00:27:00.604529 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Mar 08 00:27:00 crc kubenswrapper[4762]: I0308 00:27:00.897431 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Mar 08 00:27:00 crc kubenswrapper[4762]: I0308 00:27:00.934276 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 08 00:27:00 crc kubenswrapper[4762]: I0308 00:27:00.978994 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 08 00:27:01 crc kubenswrapper[4762]: I0308 00:27:01.014915 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 08 00:27:01 crc kubenswrapper[4762]: I0308 00:27:01.201113 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Mar 08 00:27:01 crc kubenswrapper[4762]: I0308 00:27:01.274021 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Mar 08 00:27:01 crc kubenswrapper[4762]: I0308 00:27:01.435909 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 08 00:27:01 crc kubenswrapper[4762]: I0308 00:27:01.443576 4762 reflector.go:368]
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Mar 08 00:27:01 crc kubenswrapper[4762]: I0308 00:27:01.531651 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 08 00:27:01 crc kubenswrapper[4762]: I0308 00:27:01.546360 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 08 00:27:01 crc kubenswrapper[4762]: I0308 00:27:01.561469 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 08 00:27:01 crc kubenswrapper[4762]: I0308 00:27:01.563287 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 08 00:27:01 crc kubenswrapper[4762]: I0308 00:27:01.694625 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 08 00:27:01 crc kubenswrapper[4762]: I0308 00:27:01.735281 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 08 00:27:01 crc kubenswrapper[4762]: I0308 00:27:01.750998 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 08 00:27:01 crc kubenswrapper[4762]: I0308 00:27:01.755678 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 08 00:27:02 crc kubenswrapper[4762]: I0308 00:27:02.026505 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Mar 08 00:27:02 crc kubenswrapper[4762]: I0308 00:27:02.407452 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 08 00:27:02 crc kubenswrapper[4762]: I0308 00:27:02.587385 4762 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 08 00:27:02 crc kubenswrapper[4762]: I0308 00:27:02.670511 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 08 00:27:02 crc kubenswrapper[4762]: I0308 00:27:02.837846 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 08 00:27:02 crc kubenswrapper[4762]: I0308 00:27:02.851306 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 08 00:27:02 crc kubenswrapper[4762]: I0308 00:27:02.889835 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.033522 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.109872 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.117074 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.135738 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.147945 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.248949 4762 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.352544 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.407122 4762 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.462137 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.473164 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.584991 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.608673 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.675987 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.684526 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.702900 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.808711 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Mar 08 00:27:03 crc kubenswrapper[4762]: I0308 00:27:03.931903 
4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 08 00:27:04 crc kubenswrapper[4762]: I0308 00:27:04.031008 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 08 00:27:04 crc kubenswrapper[4762]: I0308 00:27:04.036446 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 08 00:27:04 crc kubenswrapper[4762]: I0308 00:27:04.094123 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 08 00:27:04 crc kubenswrapper[4762]: I0308 00:27:04.216057 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Mar 08 00:27:04 crc kubenswrapper[4762]: I0308 00:27:04.285862 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 08 00:27:04 crc kubenswrapper[4762]: I0308 00:27:04.333361 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 08 00:27:04 crc kubenswrapper[4762]: I0308 00:27:04.446578 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 08 00:27:04 crc kubenswrapper[4762]: I0308 00:27:04.467275 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 08 00:27:04 crc kubenswrapper[4762]: I0308 00:27:04.548836 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 08 00:27:04 crc kubenswrapper[4762]: I0308 00:27:04.574787 4762 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 08 00:27:04 crc kubenswrapper[4762]: I0308 00:27:04.631217 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 08 00:27:04 crc kubenswrapper[4762]: I0308 00:27:04.705234 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 08 00:27:04 crc kubenswrapper[4762]: I0308 00:27:04.726492 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 08 00:27:04 crc kubenswrapper[4762]: I0308 00:27:04.826559 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 08 00:27:04 crc kubenswrapper[4762]: I0308 00:27:04.847417 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 08 00:27:04 crc kubenswrapper[4762]: I0308 00:27:04.970159 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.119001 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.164559 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.177882 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.350459 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.353226 4762 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.361710 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.452520 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.476095 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.486128 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.603680 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.606836 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.609413 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.667829 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.728755 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.740906 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 08 00:27:05 crc kubenswrapper[4762]: 
I0308 00:27:05.778901 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.797315 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.853679 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.899055 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 08 00:27:05 crc kubenswrapper[4762]: I0308 00:27:05.948088 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.088229 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.109529 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.111485 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.188783 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.282679 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.291566 4762 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.304952 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.346338 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.448836 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.521059 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.568126 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.599709 4762 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.599815 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.599888 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 
08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.601060 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"ebb1494dfbc794f80c24a6246263105b532b4d319089fce3f30482da0af2f4c0"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.601253 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://ebb1494dfbc794f80c24a6246263105b532b4d319089fce3f30482da0af2f4c0" gracePeriod=30 Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.710075 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.711446 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.730421 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.738931 4762 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.803513 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 08 00:27:06 crc kubenswrapper[4762]: I0308 00:27:06.805073 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Mar 08 00:27:06 crc kubenswrapper[4762]: 
I0308 00:27:06.887749 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.029466 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.056103 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.088313 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.102232 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.202617 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.218888 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.326573 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.336717 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.347841 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.373993 4762 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.477811 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.544043 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.595080 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.597255 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.616488 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.636048 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.668542 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.698820 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.709577 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.757650 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 
00:27:07.825099 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.840820 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.954336 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 08 00:27:07 crc kubenswrapper[4762]: I0308 00:27:07.998188 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.023340 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.048871 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.053902 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.102378 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.110665 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.211910 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.260489 4762 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.344968 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.363465 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.491967 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.522618 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.631938 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.689734 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.690602 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.711930 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.752714 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.805868 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 
00:27:08.852463 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 08 00:27:08 crc kubenswrapper[4762]: I0308 00:27:08.899019 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.004014 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.006522 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.011737 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.027375 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.071041 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.106832 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.112980 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.194678 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.194678 4762 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.211248 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.215525 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.321566 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.355734 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.412532 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.567793 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.567867 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.580178 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.624392 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.667922 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.673133 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.697960 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.814598 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.859988 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.912521 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.920104 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.936978 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 08 00:27:09 crc kubenswrapper[4762]: I0308 00:27:09.939273 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.010638 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.170301 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.210270 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.245243 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.378459 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.459015 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.469983 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.475451 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.479608 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.529596 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.617677 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.664233 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.681654 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.688367 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.701321 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.701941 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.721450 4762 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.742448 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.872305 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.920946 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 08 00:27:10 crc kubenswrapper[4762]: I0308 00:27:10.956359 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 08 00:27:11 crc kubenswrapper[4762]: I0308 00:27:11.004663 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 08 00:27:11 crc kubenswrapper[4762]: I0308 00:27:11.137527 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 08 00:27:11 crc kubenswrapper[4762]: I0308 00:27:11.302585 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 08 00:27:11 crc kubenswrapper[4762]: I0308 00:27:11.407252 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 08 00:27:11 crc kubenswrapper[4762]: I0308 00:27:11.433038 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 08 00:27:11 crc kubenswrapper[4762]: I0308 00:27:11.448171 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 08 00:27:11 crc kubenswrapper[4762]: I0308 00:27:11.583675 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 08 00:27:11 crc kubenswrapper[4762]: I0308 00:27:11.645466 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 08 00:27:11 crc kubenswrapper[4762]: I0308 00:27:11.737894 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 08 00:27:11 crc kubenswrapper[4762]: I0308 00:27:11.766413 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 08 00:27:11 crc kubenswrapper[4762]: I0308 00:27:11.794966 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 08 00:27:11 crc kubenswrapper[4762]: I0308 00:27:11.812199 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 08 00:27:11 crc kubenswrapper[4762]: I0308 00:27:11.956681 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Mar 08 00:27:11 crc kubenswrapper[4762]: I0308 00:27:11.990537 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.064128 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.085657 4762 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.086205 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.089336 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.100745 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.289617 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.302626 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.333899 4762 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.339408 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=45.339377405 podStartE2EDuration="45.339377405s" podCreationTimestamp="2026-03-08 00:26:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:26:49.292506843 +0000 UTC m=+230.766651197" watchObservedRunningTime="2026-03-08 00:27:12.339377405 +0000 UTC m=+253.813521789"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.343576 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.343654 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.344382 4762 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f11fe626-2001-4b03-8751-2498c02e9969"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.344435 4762 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f11fe626-2001-4b03-8751-2498c02e9969"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.354049 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.386585 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.401989 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.40195477 podStartE2EDuration="23.40195477s" podCreationTimestamp="2026-03-08 00:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:27:12.37209901 +0000 UTC m=+253.846243424" watchObservedRunningTime="2026-03-08 00:27:12.40195477 +0000 UTC m=+253.876099134"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.454884 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.487625 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.531497 4762 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.621252 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.715146 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.729207 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.810355 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.840095 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.851867 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.851957 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.876547 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.895938 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.907101 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 08 00:27:12 crc kubenswrapper[4762]: I0308 00:27:12.995199 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 08 00:27:13 crc kubenswrapper[4762]: I0308 00:27:13.114409 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 08 00:27:13 crc kubenswrapper[4762]: I0308 00:27:13.127467 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 08 00:27:13 crc kubenswrapper[4762]: I0308 00:27:13.172427 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 08 00:27:13 crc kubenswrapper[4762]: I0308 00:27:13.187499 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 08 00:27:13 crc kubenswrapper[4762]: I0308 00:27:13.243226 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Mar 08 00:27:13 crc kubenswrapper[4762]: I0308 00:27:13.430898 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 08 00:27:13 crc kubenswrapper[4762]: I0308 00:27:13.943519 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 08 00:27:13 crc kubenswrapper[4762]: I0308 00:27:13.963698 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Mar 08 00:27:14 crc kubenswrapper[4762]: I0308 00:27:14.076006 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 08 00:27:14 crc kubenswrapper[4762]: I0308 00:27:14.123477 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Mar 08 00:27:14 crc kubenswrapper[4762]: I0308 00:27:14.198563 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Mar 08 00:27:14 crc kubenswrapper[4762]: I0308 00:27:14.213632 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 08 00:27:14 crc kubenswrapper[4762]: I0308 00:27:14.241504 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 08 00:27:14 crc kubenswrapper[4762]: I0308 00:27:14.245276 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 08 00:27:14 crc kubenswrapper[4762]: I0308 00:27:14.389442 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 08 00:27:14 crc kubenswrapper[4762]: I0308 00:27:14.471089 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 08 00:27:14 crc kubenswrapper[4762]: I0308 00:27:14.507518 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 08 00:27:15 crc kubenswrapper[4762]: I0308 00:27:15.228303 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 08 00:27:15 crc kubenswrapper[4762]: I0308 00:27:15.493200 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 08 00:27:15 crc kubenswrapper[4762]: I0308 00:27:15.608794 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Mar 08 00:27:21 crc kubenswrapper[4762]: I0308 00:27:21.947031 4762 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 08 00:27:21 crc kubenswrapper[4762]: I0308 00:27:21.949269 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://990bc585ed8afc9b74fbfd9d199fc06804350d2335ff55d1a5dfbaad265538ca" gracePeriod=5
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.418614 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.419720 4762 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="990bc585ed8afc9b74fbfd9d199fc06804350d2335ff55d1a5dfbaad265538ca" exitCode=137
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.545223 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.545305 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.679064 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.679115 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.679177 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.679238 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.679268 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.679600 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.679737 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.679788 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.679846 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.690662 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.781321 4762 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.781361 4762 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.781375 4762 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.781389 4762 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Mar 08 00:27:27 crc kubenswrapper[4762]: I0308 00:27:27.781402 4762 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Mar 08 00:27:28 crc kubenswrapper[4762]: I0308 00:27:28.430283 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Mar 08 00:27:28 crc kubenswrapper[4762]: I0308 00:27:28.430797 4762 scope.go:117] "RemoveContainer" containerID="990bc585ed8afc9b74fbfd9d199fc06804350d2335ff55d1a5dfbaad265538ca"
Mar 08 00:27:28 crc kubenswrapper[4762]: I0308 00:27:28.430927 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 08 00:27:29 crc kubenswrapper[4762]: I0308 00:27:29.277697 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Mar 08 00:27:29 crc kubenswrapper[4762]: I0308 00:27:29.277959 4762 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Mar 08 00:27:29 crc kubenswrapper[4762]: I0308 00:27:29.290751 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 08 00:27:29 crc kubenswrapper[4762]: I0308 00:27:29.290827 4762 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1d75d35d-d7aa-4bf3-8aef-1ef5c05e8251"
Mar 08 00:27:29 crc kubenswrapper[4762]: I0308 00:27:29.293887 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 08 00:27:29 crc kubenswrapper[4762]: I0308 00:27:29.293927 4762 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="1d75d35d-d7aa-4bf3-8aef-1ef5c05e8251"
Mar 08 00:27:32 crc kubenswrapper[4762]: I0308 00:27:32.457133 4762 generic.go:334] "Generic (PLEG): container finished" podID="62e4d886-779c-4931-87f7-370090b02132" containerID="b7fb236e6c44d73ccb8718946e751a7f3d78cadfab89932ae5f93ad61ed6f8a4" exitCode=0
Mar 08 00:27:32 crc kubenswrapper[4762]: I0308 00:27:32.457240 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" event={"ID":"62e4d886-779c-4931-87f7-370090b02132","Type":"ContainerDied","Data":"b7fb236e6c44d73ccb8718946e751a7f3d78cadfab89932ae5f93ad61ed6f8a4"}
Mar 08 00:27:32 crc kubenswrapper[4762]: I0308 00:27:32.458202 4762 scope.go:117] "RemoveContainer" containerID="b7fb236e6c44d73ccb8718946e751a7f3d78cadfab89932ae5f93ad61ed6f8a4"
Mar 08 00:27:33 crc kubenswrapper[4762]: I0308 00:27:33.468680 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" event={"ID":"62e4d886-779c-4931-87f7-370090b02132","Type":"ContainerStarted","Data":"6f9cb375e6c0f683d68485ca6ed1065f77f5d1c26c601ad1830e824940f8eb5d"}
Mar 08 00:27:33 crc kubenswrapper[4762]: I0308 00:27:33.469563 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl"
Mar 08 00:27:33 crc kubenswrapper[4762]: I0308 00:27:33.471848 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl"
Mar 08 00:27:37 crc kubenswrapper[4762]: I0308 00:27:37.498577 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Mar 08 00:27:37 crc kubenswrapper[4762]: I0308 00:27:37.501408 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 08 00:27:37 crc kubenswrapper[4762]: I0308 00:27:37.502377 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Mar 08 00:27:37 crc kubenswrapper[4762]: I0308 00:27:37.502479 4762 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ebb1494dfbc794f80c24a6246263105b532b4d319089fce3f30482da0af2f4c0" exitCode=137
Mar 08 00:27:37 crc kubenswrapper[4762]: I0308 00:27:37.502559 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ebb1494dfbc794f80c24a6246263105b532b4d319089fce3f30482da0af2f4c0"}
Mar 08 00:27:37 crc kubenswrapper[4762]: I0308 00:27:37.502668 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"26ae1944ffd2b77dcc8f996410ea88ed9aec65c78681e51eaed7801dd5610c9f"}
Mar 08 00:27:37 crc kubenswrapper[4762]: I0308 00:27:37.502709 4762 scope.go:117] "RemoveContainer" containerID="d7dc43b94483b191269958f97bb05775c3ca2f45710b19afc37d73ea96f44ab0"
Mar 08 00:27:38 crc kubenswrapper[4762]: I0308 00:27:38.513473 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Mar 08 00:27:38 crc kubenswrapper[4762]: I0308 00:27:38.516398 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 08 00:27:42 crc kubenswrapper[4762]: I0308 00:27:42.851874 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 08 00:27:42 crc kubenswrapper[4762]: I0308 00:27:42.852370 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 08 00:27:45 crc kubenswrapper[4762]: I0308 00:27:45.289497 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 08 00:27:46 crc kubenswrapper[4762]: I0308 00:27:46.599437 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 08 00:27:46 crc kubenswrapper[4762]: I0308 00:27:46.604816 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 08 00:27:47 crc kubenswrapper[4762]: I0308 00:27:47.599196 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 08 00:28:00 crc kubenswrapper[4762]: I0308 00:28:00.177137 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548828-5t2t6"]
Mar 08 00:28:00 crc kubenswrapper[4762]: E0308 00:28:00.179344 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f89876a6-46ce-4acd-8078-c41d23a2330e" containerName="installer"
Mar 08 00:28:00 crc kubenswrapper[4762]: I0308 00:28:00.179419 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f89876a6-46ce-4acd-8078-c41d23a2330e" containerName="installer"
Mar 08 00:28:00 crc kubenswrapper[4762]: E0308 00:28:00.179462 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Mar 08 00:28:00 crc kubenswrapper[4762]: I0308 00:28:00.179481 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Mar 08 00:28:00 crc kubenswrapper[4762]: I0308 00:28:00.179729 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89876a6-46ce-4acd-8078-c41d23a2330e" containerName="installer"
Mar 08 00:28:00 crc kubenswrapper[4762]: I0308 00:28:00.179832 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Mar 08 00:28:00 crc kubenswrapper[4762]: I0308 00:28:00.180611 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548828-5t2t6"
Mar 08 00:28:00 crc kubenswrapper[4762]: I0308 00:28:00.183241 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk"
Mar 08 00:28:00 crc kubenswrapper[4762]: I0308 00:28:00.183494 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 08 00:28:00 crc kubenswrapper[4762]: I0308 00:28:00.183725 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 08 00:28:00 crc kubenswrapper[4762]: I0308 00:28:00.188689 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548828-5t2t6"]
Mar 08 00:28:00 crc kubenswrapper[4762]: I0308 00:28:00.251862 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hgdc\" (UniqueName: \"kubernetes.io/projected/638abfba-91f5-4c8d-819b-9940c1dddd1c-kube-api-access-6hgdc\") pod \"auto-csr-approver-29548828-5t2t6\" (UID: \"638abfba-91f5-4c8d-819b-9940c1dddd1c\") " pod="openshift-infra/auto-csr-approver-29548828-5t2t6"
Mar 08 00:28:00 crc kubenswrapper[4762]: I0308 00:28:00.353665 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hgdc\" (UniqueName: \"kubernetes.io/projected/638abfba-91f5-4c8d-819b-9940c1dddd1c-kube-api-access-6hgdc\") pod \"auto-csr-approver-29548828-5t2t6\" (UID: \"638abfba-91f5-4c8d-819b-9940c1dddd1c\") " pod="openshift-infra/auto-csr-approver-29548828-5t2t6"
Mar 08 00:28:00 crc kubenswrapper[4762]: I0308 00:28:00.387224 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hgdc\" (UniqueName: \"kubernetes.io/projected/638abfba-91f5-4c8d-819b-9940c1dddd1c-kube-api-access-6hgdc\") pod \"auto-csr-approver-29548828-5t2t6\" (UID: \"638abfba-91f5-4c8d-819b-9940c1dddd1c\") " pod="openshift-infra/auto-csr-approver-29548828-5t2t6"
Mar 08 00:28:00 crc kubenswrapper[4762]: I0308 00:28:00.505934 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548828-5t2t6"
Mar 08 00:28:00 crc kubenswrapper[4762]: I0308 00:28:00.992156 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548828-5t2t6"]
Mar 08 00:28:01 crc kubenswrapper[4762]: I0308 00:28:01.688982 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548828-5t2t6" event={"ID":"638abfba-91f5-4c8d-819b-9940c1dddd1c","Type":"ContainerStarted","Data":"da6fe5db1c834636b2ffecff7cd5682a559fbb33864ed785c2594fda515e2290"}
Mar 08 00:28:02 crc kubenswrapper[4762]: I0308 00:28:02.697725 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548828-5t2t6" event={"ID":"638abfba-91f5-4c8d-819b-9940c1dddd1c","Type":"ContainerStarted","Data":"04aefecf14b583bb4d35b3b92c22dfb189479db936045c79892a84465ab36fa4"}
Mar 08 00:28:02 crc kubenswrapper[4762]: I0308 00:28:02.715614 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548828-5t2t6" podStartSLOduration=1.518516429 podStartE2EDuration="2.715596458s" podCreationTimestamp="2026-03-08 00:28:00 +0000 UTC" firstStartedPulling="2026-03-08 00:28:01.022232783 +0000 UTC m=+302.496377167" lastFinishedPulling="2026-03-08 00:28:02.219312812 +0000 UTC m=+303.693457196" observedRunningTime="2026-03-08 00:28:02.713365024 +0000 UTC m=+304.187509408" watchObservedRunningTime="2026-03-08 00:28:02.715596458 +0000 UTC m=+304.189740812"
Mar 08 00:28:03 crc kubenswrapper[4762]: I0308 00:28:03.708149 4762 generic.go:334] "Generic (PLEG): container finished" podID="638abfba-91f5-4c8d-819b-9940c1dddd1c" containerID="04aefecf14b583bb4d35b3b92c22dfb189479db936045c79892a84465ab36fa4" exitCode=0
Mar 08 00:28:03 crc kubenswrapper[4762]: I0308 00:28:03.708236 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548828-5t2t6" event={"ID":"638abfba-91f5-4c8d-819b-9940c1dddd1c","Type":"ContainerDied","Data":"04aefecf14b583bb4d35b3b92c22dfb189479db936045c79892a84465ab36fa4"}
Mar 08 00:28:05 crc kubenswrapper[4762]: I0308 00:28:05.032284 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548828-5t2t6"
Mar 08 00:28:05 crc kubenswrapper[4762]: I0308 00:28:05.117006 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hgdc\" (UniqueName: \"kubernetes.io/projected/638abfba-91f5-4c8d-819b-9940c1dddd1c-kube-api-access-6hgdc\") pod \"638abfba-91f5-4c8d-819b-9940c1dddd1c\" (UID: \"638abfba-91f5-4c8d-819b-9940c1dddd1c\") "
Mar 08 00:28:05 crc kubenswrapper[4762]: I0308 00:28:05.126667 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638abfba-91f5-4c8d-819b-9940c1dddd1c-kube-api-access-6hgdc" (OuterVolumeSpecName: "kube-api-access-6hgdc") pod "638abfba-91f5-4c8d-819b-9940c1dddd1c" (UID: "638abfba-91f5-4c8d-819b-9940c1dddd1c"). InnerVolumeSpecName "kube-api-access-6hgdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:28:05 crc kubenswrapper[4762]: I0308 00:28:05.219351 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hgdc\" (UniqueName: \"kubernetes.io/projected/638abfba-91f5-4c8d-819b-9940c1dddd1c-kube-api-access-6hgdc\") on node \"crc\" DevicePath \"\"" Mar 08 00:28:05 crc kubenswrapper[4762]: I0308 00:28:05.725318 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548828-5t2t6" event={"ID":"638abfba-91f5-4c8d-819b-9940c1dddd1c","Type":"ContainerDied","Data":"da6fe5db1c834636b2ffecff7cd5682a559fbb33864ed785c2594fda515e2290"} Mar 08 00:28:05 crc kubenswrapper[4762]: I0308 00:28:05.725847 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da6fe5db1c834636b2ffecff7cd5682a559fbb33864ed785c2594fda515e2290" Mar 08 00:28:05 crc kubenswrapper[4762]: I0308 00:28:05.725639 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548828-5t2t6" Mar 08 00:28:12 crc kubenswrapper[4762]: I0308 00:28:12.852313 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 00:28:12 crc kubenswrapper[4762]: I0308 00:28:12.852790 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:28:12 crc kubenswrapper[4762]: I0308 00:28:12.852848 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:28:12 crc kubenswrapper[4762]: I0308 00:28:12.853575 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"257945ccf73ed75a308d80dc75a5f11ebd89eba7e7970e38512c4bec2dcc8e73"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 00:28:12 crc kubenswrapper[4762]: I0308 00:28:12.853644 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://257945ccf73ed75a308d80dc75a5f11ebd89eba7e7970e38512c4bec2dcc8e73" gracePeriod=600 Mar 08 00:28:13 crc kubenswrapper[4762]: I0308 00:28:13.785806 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="257945ccf73ed75a308d80dc75a5f11ebd89eba7e7970e38512c4bec2dcc8e73" exitCode=0 Mar 08 00:28:13 crc kubenswrapper[4762]: I0308 00:28:13.785849 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"257945ccf73ed75a308d80dc75a5f11ebd89eba7e7970e38512c4bec2dcc8e73"} Mar 08 00:28:13 crc kubenswrapper[4762]: I0308 00:28:13.786347 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"76f60a95ba76104d27683e873ff20dc2cc911e060fffacaab8d5230c6f720521"} Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.533379 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qv7hs"] Mar 
08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.536308 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qv7hs" podUID="1b1f4525-a957-4708-b166-0b16f67cb20a" containerName="registry-server" containerID="cri-o://b294f9a5fbd7ca36fedf6d08a9d63ed6ddcdb20c565c393d163ef8758a5e0a14" gracePeriod=30 Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.549377 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dcsdg"] Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.549673 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dcsdg" podUID="210aa3ef-23bb-4e7b-9ff5-39cec85310ba" containerName="registry-server" containerID="cri-o://ea6e393a9fbec1214b9250b27dae9441d4f4912ecdafbb23780dba80212cbfdd" gracePeriod=30 Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.576753 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mg6jl"] Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.577377 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" podUID="62e4d886-779c-4931-87f7-370090b02132" containerName="marketplace-operator" containerID="cri-o://6f9cb375e6c0f683d68485ca6ed1065f77f5d1c26c601ad1830e824940f8eb5d" gracePeriod=30 Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.588877 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc85x"] Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.589239 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wc85x" podUID="63ac2172-da6d-436b-8cde-593837d65920" containerName="registry-server" 
containerID="cri-o://0f1d913f26c675392b272c522cd3e99bc41e7fd5f326449d03c114311f14a475" gracePeriod=30 Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.606574 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4c9k"] Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.607335 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-l4c9k" podUID="30000013-c882-4eaa-a7f0-fc380ef4f09c" containerName="registry-server" containerID="cri-o://f5a6dacb6984b894bc6c80aab5ad9ea6a896efbadda90eb28afac6d737b48a9b" gracePeriod=30 Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.628878 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sxtbp"] Mar 08 00:28:47 crc kubenswrapper[4762]: E0308 00:28:47.629404 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638abfba-91f5-4c8d-819b-9940c1dddd1c" containerName="oc" Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.629427 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="638abfba-91f5-4c8d-819b-9940c1dddd1c" containerName="oc" Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.629698 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="638abfba-91f5-4c8d-819b-9940c1dddd1c" containerName="oc" Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.631337 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.644790 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sxtbp"] Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.667024 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/45e73cf0-17af-446f-8a92-5c45dee4ee00-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sxtbp\" (UID: \"45e73cf0-17af-446f-8a92-5c45dee4ee00\") " pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.667112 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45e73cf0-17af-446f-8a92-5c45dee4ee00-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sxtbp\" (UID: \"45e73cf0-17af-446f-8a92-5c45dee4ee00\") " pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.667187 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt2f2\" (UniqueName: \"kubernetes.io/projected/45e73cf0-17af-446f-8a92-5c45dee4ee00-kube-api-access-qt2f2\") pod \"marketplace-operator-79b997595-sxtbp\" (UID: \"45e73cf0-17af-446f-8a92-5c45dee4ee00\") " pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.802357 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/45e73cf0-17af-446f-8a92-5c45dee4ee00-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sxtbp\" (UID: 
\"45e73cf0-17af-446f-8a92-5c45dee4ee00\") " pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.802543 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45e73cf0-17af-446f-8a92-5c45dee4ee00-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sxtbp\" (UID: \"45e73cf0-17af-446f-8a92-5c45dee4ee00\") " pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.802655 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt2f2\" (UniqueName: \"kubernetes.io/projected/45e73cf0-17af-446f-8a92-5c45dee4ee00-kube-api-access-qt2f2\") pod \"marketplace-operator-79b997595-sxtbp\" (UID: \"45e73cf0-17af-446f-8a92-5c45dee4ee00\") " pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.805806 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/45e73cf0-17af-446f-8a92-5c45dee4ee00-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sxtbp\" (UID: \"45e73cf0-17af-446f-8a92-5c45dee4ee00\") " pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.818086 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/45e73cf0-17af-446f-8a92-5c45dee4ee00-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sxtbp\" (UID: \"45e73cf0-17af-446f-8a92-5c45dee4ee00\") " pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.823469 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt2f2\" 
(UniqueName: \"kubernetes.io/projected/45e73cf0-17af-446f-8a92-5c45dee4ee00-kube-api-access-qt2f2\") pod \"marketplace-operator-79b997595-sxtbp\" (UID: \"45e73cf0-17af-446f-8a92-5c45dee4ee00\") " pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.993483 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" Mar 08 00:28:47 crc kubenswrapper[4762]: I0308 00:28:47.996712 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.017108 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.034181 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.034801 4762 generic.go:334] "Generic (PLEG): container finished" podID="30000013-c882-4eaa-a7f0-fc380ef4f09c" containerID="f5a6dacb6984b894bc6c80aab5ad9ea6a896efbadda90eb28afac6d737b48a9b" exitCode=0 Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.034896 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4c9k" event={"ID":"30000013-c882-4eaa-a7f0-fc380ef4f09c","Type":"ContainerDied","Data":"f5a6dacb6984b894bc6c80aab5ad9ea6a896efbadda90eb28afac6d737b48a9b"} Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.037850 4762 generic.go:334] "Generic (PLEG): container finished" podID="1b1f4525-a957-4708-b166-0b16f67cb20a" containerID="b294f9a5fbd7ca36fedf6d08a9d63ed6ddcdb20c565c393d163ef8758a5e0a14" exitCode=0 Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.037981 4762 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qv7hs" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.040055 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qv7hs" event={"ID":"1b1f4525-a957-4708-b166-0b16f67cb20a","Type":"ContainerDied","Data":"b294f9a5fbd7ca36fedf6d08a9d63ed6ddcdb20c565c393d163ef8758a5e0a14"} Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.040367 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qv7hs" event={"ID":"1b1f4525-a957-4708-b166-0b16f67cb20a","Type":"ContainerDied","Data":"b5a4dc08aadd4dc756fd5a63203b83065f052e5a9b57cd269de436abbc056faa"} Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.040425 4762 scope.go:117] "RemoveContainer" containerID="b294f9a5fbd7ca36fedf6d08a9d63ed6ddcdb20c565c393d163ef8758a5e0a14" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.065102 4762 generic.go:334] "Generic (PLEG): container finished" podID="210aa3ef-23bb-4e7b-9ff5-39cec85310ba" containerID="ea6e393a9fbec1214b9250b27dae9441d4f4912ecdafbb23780dba80212cbfdd" exitCode=0 Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.065304 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcsdg" event={"ID":"210aa3ef-23bb-4e7b-9ff5-39cec85310ba","Type":"ContainerDied","Data":"ea6e393a9fbec1214b9250b27dae9441d4f4912ecdafbb23780dba80212cbfdd"} Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.065327 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dcsdg" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.066226 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dcsdg" event={"ID":"210aa3ef-23bb-4e7b-9ff5-39cec85310ba","Type":"ContainerDied","Data":"7b6cf5dad20512ca4374e24fa243c257373a5f582e69e8a739248c71206a82ea"} Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.075684 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.083376 4762 generic.go:334] "Generic (PLEG): container finished" podID="63ac2172-da6d-436b-8cde-593837d65920" containerID="0f1d913f26c675392b272c522cd3e99bc41e7fd5f326449d03c114311f14a475" exitCode=0 Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.083558 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc85x" event={"ID":"63ac2172-da6d-436b-8cde-593837d65920","Type":"ContainerDied","Data":"0f1d913f26c675392b272c522cd3e99bc41e7fd5f326449d03c114311f14a475"} Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.083624 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wc85x" event={"ID":"63ac2172-da6d-436b-8cde-593837d65920","Type":"ContainerDied","Data":"147ba06789405ddc0730aab97de66e98d94bafdaf1a1a0b8939a030049a60d44"} Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.083740 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wc85x" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.086027 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.092142 4762 generic.go:334] "Generic (PLEG): container finished" podID="62e4d886-779c-4931-87f7-370090b02132" containerID="6f9cb375e6c0f683d68485ca6ed1065f77f5d1c26c601ad1830e824940f8eb5d" exitCode=0 Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.092213 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" event={"ID":"62e4d886-779c-4931-87f7-370090b02132","Type":"ContainerDied","Data":"6f9cb375e6c0f683d68485ca6ed1065f77f5d1c26c601ad1830e824940f8eb5d"} Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.092297 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mg6jl" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.104946 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ac2172-da6d-436b-8cde-593837d65920-catalog-content\") pod \"63ac2172-da6d-436b-8cde-593837d65920\" (UID: \"63ac2172-da6d-436b-8cde-593837d65920\") " Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.105017 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b1f4525-a957-4708-b166-0b16f67cb20a-catalog-content\") pod \"1b1f4525-a957-4708-b166-0b16f67cb20a\" (UID: \"1b1f4525-a957-4708-b166-0b16f67cb20a\") " Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.105046 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ac2172-da6d-436b-8cde-593837d65920-utilities\") pod \"63ac2172-da6d-436b-8cde-593837d65920\" (UID: \"63ac2172-da6d-436b-8cde-593837d65920\") " Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.105088 4762 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzqll\" (UniqueName: \"kubernetes.io/projected/63ac2172-da6d-436b-8cde-593837d65920-kube-api-access-vzqll\") pod \"63ac2172-da6d-436b-8cde-593837d65920\" (UID: \"63ac2172-da6d-436b-8cde-593837d65920\") " Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.105127 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcc5q\" (UniqueName: \"kubernetes.io/projected/1b1f4525-a957-4708-b166-0b16f67cb20a-kube-api-access-tcc5q\") pod \"1b1f4525-a957-4708-b166-0b16f67cb20a\" (UID: \"1b1f4525-a957-4708-b166-0b16f67cb20a\") " Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.105206 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b1f4525-a957-4708-b166-0b16f67cb20a-utilities\") pod \"1b1f4525-a957-4708-b166-0b16f67cb20a\" (UID: \"1b1f4525-a957-4708-b166-0b16f67cb20a\") " Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.105223 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-utilities\") pod \"210aa3ef-23bb-4e7b-9ff5-39cec85310ba\" (UID: \"210aa3ef-23bb-4e7b-9ff5-39cec85310ba\") " Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.105270 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8s6q\" (UniqueName: \"kubernetes.io/projected/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-kube-api-access-r8s6q\") pod \"210aa3ef-23bb-4e7b-9ff5-39cec85310ba\" (UID: \"210aa3ef-23bb-4e7b-9ff5-39cec85310ba\") " Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.105298 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-catalog-content\") pod 
\"210aa3ef-23bb-4e7b-9ff5-39cec85310ba\" (UID: \"210aa3ef-23bb-4e7b-9ff5-39cec85310ba\") " Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.106982 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63ac2172-da6d-436b-8cde-593837d65920-utilities" (OuterVolumeSpecName: "utilities") pod "63ac2172-da6d-436b-8cde-593837d65920" (UID: "63ac2172-da6d-436b-8cde-593837d65920"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.107039 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-utilities" (OuterVolumeSpecName: "utilities") pod "210aa3ef-23bb-4e7b-9ff5-39cec85310ba" (UID: "210aa3ef-23bb-4e7b-9ff5-39cec85310ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.108895 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b1f4525-a957-4708-b166-0b16f67cb20a-utilities" (OuterVolumeSpecName: "utilities") pod "1b1f4525-a957-4708-b166-0b16f67cb20a" (UID: "1b1f4525-a957-4708-b166-0b16f67cb20a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.113450 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-kube-api-access-r8s6q" (OuterVolumeSpecName: "kube-api-access-r8s6q") pod "210aa3ef-23bb-4e7b-9ff5-39cec85310ba" (UID: "210aa3ef-23bb-4e7b-9ff5-39cec85310ba"). InnerVolumeSpecName "kube-api-access-r8s6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.113590 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b1f4525-a957-4708-b166-0b16f67cb20a-kube-api-access-tcc5q" (OuterVolumeSpecName: "kube-api-access-tcc5q") pod "1b1f4525-a957-4708-b166-0b16f67cb20a" (UID: "1b1f4525-a957-4708-b166-0b16f67cb20a"). InnerVolumeSpecName "kube-api-access-tcc5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.113746 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ac2172-da6d-436b-8cde-593837d65920-kube-api-access-vzqll" (OuterVolumeSpecName: "kube-api-access-vzqll") pod "63ac2172-da6d-436b-8cde-593837d65920" (UID: "63ac2172-da6d-436b-8cde-593837d65920"). InnerVolumeSpecName "kube-api-access-vzqll". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.115024 4762 scope.go:117] "RemoveContainer" containerID="7c5a9cfc9a7f236f403a919958504d229b649c5d289508086b6ba2bd5065a235" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.167211 4762 scope.go:117] "RemoveContainer" containerID="6ba319e720db6f9b62f4f7e2208e79a9554a80b3a30f1d520f337309b4d2da5f" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.167744 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63ac2172-da6d-436b-8cde-593837d65920-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63ac2172-da6d-436b-8cde-593837d65920" (UID: "63ac2172-da6d-436b-8cde-593837d65920"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.203509 4762 scope.go:117] "RemoveContainer" containerID="b294f9a5fbd7ca36fedf6d08a9d63ed6ddcdb20c565c393d163ef8758a5e0a14" Mar 08 00:28:48 crc kubenswrapper[4762]: E0308 00:28:48.206215 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b294f9a5fbd7ca36fedf6d08a9d63ed6ddcdb20c565c393d163ef8758a5e0a14\": container with ID starting with b294f9a5fbd7ca36fedf6d08a9d63ed6ddcdb20c565c393d163ef8758a5e0a14 not found: ID does not exist" containerID="b294f9a5fbd7ca36fedf6d08a9d63ed6ddcdb20c565c393d163ef8758a5e0a14" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.206390 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b294f9a5fbd7ca36fedf6d08a9d63ed6ddcdb20c565c393d163ef8758a5e0a14"} err="failed to get container status \"b294f9a5fbd7ca36fedf6d08a9d63ed6ddcdb20c565c393d163ef8758a5e0a14\": rpc error: code = NotFound desc = could not find container \"b294f9a5fbd7ca36fedf6d08a9d63ed6ddcdb20c565c393d163ef8758a5e0a14\": container with ID starting with b294f9a5fbd7ca36fedf6d08a9d63ed6ddcdb20c565c393d163ef8758a5e0a14 not found: ID does not exist" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.206560 4762 scope.go:117] "RemoveContainer" containerID="7c5a9cfc9a7f236f403a919958504d229b649c5d289508086b6ba2bd5065a235" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.206594 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30000013-c882-4eaa-a7f0-fc380ef4f09c-catalog-content\") pod \"30000013-c882-4eaa-a7f0-fc380ef4f09c\" (UID: \"30000013-c882-4eaa-a7f0-fc380ef4f09c\") " Mar 08 00:28:48 crc kubenswrapper[4762]: E0308 00:28:48.207178 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"7c5a9cfc9a7f236f403a919958504d229b649c5d289508086b6ba2bd5065a235\": container with ID starting with 7c5a9cfc9a7f236f403a919958504d229b649c5d289508086b6ba2bd5065a235 not found: ID does not exist" containerID="7c5a9cfc9a7f236f403a919958504d229b649c5d289508086b6ba2bd5065a235" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.207239 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c5a9cfc9a7f236f403a919958504d229b649c5d289508086b6ba2bd5065a235"} err="failed to get container status \"7c5a9cfc9a7f236f403a919958504d229b649c5d289508086b6ba2bd5065a235\": rpc error: code = NotFound desc = could not find container \"7c5a9cfc9a7f236f403a919958504d229b649c5d289508086b6ba2bd5065a235\": container with ID starting with 7c5a9cfc9a7f236f403a919958504d229b649c5d289508086b6ba2bd5065a235 not found: ID does not exist" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.207269 4762 scope.go:117] "RemoveContainer" containerID="6ba319e720db6f9b62f4f7e2208e79a9554a80b3a30f1d520f337309b4d2da5f" Mar 08 00:28:48 crc kubenswrapper[4762]: E0308 00:28:48.207701 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ba319e720db6f9b62f4f7e2208e79a9554a80b3a30f1d520f337309b4d2da5f\": container with ID starting with 6ba319e720db6f9b62f4f7e2208e79a9554a80b3a30f1d520f337309b4d2da5f not found: ID does not exist" containerID="6ba319e720db6f9b62f4f7e2208e79a9554a80b3a30f1d520f337309b4d2da5f" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.207833 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ba319e720db6f9b62f4f7e2208e79a9554a80b3a30f1d520f337309b4d2da5f"} err="failed to get container status \"6ba319e720db6f9b62f4f7e2208e79a9554a80b3a30f1d520f337309b4d2da5f\": rpc error: code = NotFound desc = could not find container \"6ba319e720db6f9b62f4f7e2208e79a9554a80b3a30f1d520f337309b4d2da5f\": 
container with ID starting with 6ba319e720db6f9b62f4f7e2208e79a9554a80b3a30f1d520f337309b4d2da5f not found: ID does not exist" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.207932 4762 scope.go:117] "RemoveContainer" containerID="ea6e393a9fbec1214b9250b27dae9441d4f4912ecdafbb23780dba80212cbfdd" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.208559 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/62e4d886-779c-4931-87f7-370090b02132-marketplace-operator-metrics\") pod \"62e4d886-779c-4931-87f7-370090b02132\" (UID: \"62e4d886-779c-4931-87f7-370090b02132\") " Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.210705 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htklq\" (UniqueName: \"kubernetes.io/projected/62e4d886-779c-4931-87f7-370090b02132-kube-api-access-htklq\") pod \"62e4d886-779c-4931-87f7-370090b02132\" (UID: \"62e4d886-779c-4931-87f7-370090b02132\") " Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.210984 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30000013-c882-4eaa-a7f0-fc380ef4f09c-utilities\") pod \"30000013-c882-4eaa-a7f0-fc380ef4f09c\" (UID: \"30000013-c882-4eaa-a7f0-fc380ef4f09c\") " Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.211101 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn5lb\" (UniqueName: \"kubernetes.io/projected/30000013-c882-4eaa-a7f0-fc380ef4f09c-kube-api-access-pn5lb\") pod \"30000013-c882-4eaa-a7f0-fc380ef4f09c\" (UID: \"30000013-c882-4eaa-a7f0-fc380ef4f09c\") " Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.211311 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/62e4d886-779c-4931-87f7-370090b02132-marketplace-trusted-ca\") pod \"62e4d886-779c-4931-87f7-370090b02132\" (UID: \"62e4d886-779c-4931-87f7-370090b02132\") " Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.211847 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30000013-c882-4eaa-a7f0-fc380ef4f09c-utilities" (OuterVolumeSpecName: "utilities") pod "30000013-c882-4eaa-a7f0-fc380ef4f09c" (UID: "30000013-c882-4eaa-a7f0-fc380ef4f09c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.212007 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzqll\" (UniqueName: \"kubernetes.io/projected/63ac2172-da6d-436b-8cde-593837d65920-kube-api-access-vzqll\") on node \"crc\" DevicePath \"\"" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.212090 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcc5q\" (UniqueName: \"kubernetes.io/projected/1b1f4525-a957-4708-b166-0b16f67cb20a-kube-api-access-tcc5q\") on node \"crc\" DevicePath \"\"" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.212181 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b1f4525-a957-4708-b166-0b16f67cb20a-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.212246 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.212314 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8s6q\" (UniqueName: \"kubernetes.io/projected/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-kube-api-access-r8s6q\") on node \"crc\" DevicePath \"\"" Mar 08 00:28:48 crc 
kubenswrapper[4762]: I0308 00:28:48.212374 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63ac2172-da6d-436b-8cde-593837d65920-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.212438 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63ac2172-da6d-436b-8cde-593837d65920-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.212301 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "210aa3ef-23bb-4e7b-9ff5-39cec85310ba" (UID: "210aa3ef-23bb-4e7b-9ff5-39cec85310ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.213334 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62e4d886-779c-4931-87f7-370090b02132-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "62e4d886-779c-4931-87f7-370090b02132" (UID: "62e4d886-779c-4931-87f7-370090b02132"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.217039 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30000013-c882-4eaa-a7f0-fc380ef4f09c-kube-api-access-pn5lb" (OuterVolumeSpecName: "kube-api-access-pn5lb") pod "30000013-c882-4eaa-a7f0-fc380ef4f09c" (UID: "30000013-c882-4eaa-a7f0-fc380ef4f09c"). InnerVolumeSpecName "kube-api-access-pn5lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.220512 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62e4d886-779c-4931-87f7-370090b02132-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "62e4d886-779c-4931-87f7-370090b02132" (UID: "62e4d886-779c-4931-87f7-370090b02132"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.220600 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62e4d886-779c-4931-87f7-370090b02132-kube-api-access-htklq" (OuterVolumeSpecName: "kube-api-access-htklq") pod "62e4d886-779c-4931-87f7-370090b02132" (UID: "62e4d886-779c-4931-87f7-370090b02132"). InnerVolumeSpecName "kube-api-access-htklq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.224092 4762 scope.go:117] "RemoveContainer" containerID="9999d9678b1098311895679b6af20c93e1c792731398f2e13087265dc2ddb632" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.224490 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b1f4525-a957-4708-b166-0b16f67cb20a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b1f4525-a957-4708-b166-0b16f67cb20a" (UID: "1b1f4525-a957-4708-b166-0b16f67cb20a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.242396 4762 scope.go:117] "RemoveContainer" containerID="bafde9770a2310e4f7abd9331d59e7ee3103b84eb1f6c5d5e20ed96a99134a13" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.258359 4762 scope.go:117] "RemoveContainer" containerID="ea6e393a9fbec1214b9250b27dae9441d4f4912ecdafbb23780dba80212cbfdd" Mar 08 00:28:48 crc kubenswrapper[4762]: E0308 00:28:48.258712 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea6e393a9fbec1214b9250b27dae9441d4f4912ecdafbb23780dba80212cbfdd\": container with ID starting with ea6e393a9fbec1214b9250b27dae9441d4f4912ecdafbb23780dba80212cbfdd not found: ID does not exist" containerID="ea6e393a9fbec1214b9250b27dae9441d4f4912ecdafbb23780dba80212cbfdd" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.258747 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6e393a9fbec1214b9250b27dae9441d4f4912ecdafbb23780dba80212cbfdd"} err="failed to get container status \"ea6e393a9fbec1214b9250b27dae9441d4f4912ecdafbb23780dba80212cbfdd\": rpc error: code = NotFound desc = could not find container \"ea6e393a9fbec1214b9250b27dae9441d4f4912ecdafbb23780dba80212cbfdd\": container with ID starting with ea6e393a9fbec1214b9250b27dae9441d4f4912ecdafbb23780dba80212cbfdd not found: ID does not exist" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.258794 4762 scope.go:117] "RemoveContainer" containerID="9999d9678b1098311895679b6af20c93e1c792731398f2e13087265dc2ddb632" Mar 08 00:28:48 crc kubenswrapper[4762]: E0308 00:28:48.259320 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9999d9678b1098311895679b6af20c93e1c792731398f2e13087265dc2ddb632\": container with ID starting with 
9999d9678b1098311895679b6af20c93e1c792731398f2e13087265dc2ddb632 not found: ID does not exist" containerID="9999d9678b1098311895679b6af20c93e1c792731398f2e13087265dc2ddb632" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.259383 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9999d9678b1098311895679b6af20c93e1c792731398f2e13087265dc2ddb632"} err="failed to get container status \"9999d9678b1098311895679b6af20c93e1c792731398f2e13087265dc2ddb632\": rpc error: code = NotFound desc = could not find container \"9999d9678b1098311895679b6af20c93e1c792731398f2e13087265dc2ddb632\": container with ID starting with 9999d9678b1098311895679b6af20c93e1c792731398f2e13087265dc2ddb632 not found: ID does not exist" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.259421 4762 scope.go:117] "RemoveContainer" containerID="bafde9770a2310e4f7abd9331d59e7ee3103b84eb1f6c5d5e20ed96a99134a13" Mar 08 00:28:48 crc kubenswrapper[4762]: E0308 00:28:48.259929 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bafde9770a2310e4f7abd9331d59e7ee3103b84eb1f6c5d5e20ed96a99134a13\": container with ID starting with bafde9770a2310e4f7abd9331d59e7ee3103b84eb1f6c5d5e20ed96a99134a13 not found: ID does not exist" containerID="bafde9770a2310e4f7abd9331d59e7ee3103b84eb1f6c5d5e20ed96a99134a13" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.259978 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bafde9770a2310e4f7abd9331d59e7ee3103b84eb1f6c5d5e20ed96a99134a13"} err="failed to get container status \"bafde9770a2310e4f7abd9331d59e7ee3103b84eb1f6c5d5e20ed96a99134a13\": rpc error: code = NotFound desc = could not find container \"bafde9770a2310e4f7abd9331d59e7ee3103b84eb1f6c5d5e20ed96a99134a13\": container with ID starting with bafde9770a2310e4f7abd9331d59e7ee3103b84eb1f6c5d5e20ed96a99134a13 not found: ID does not 
exist" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.260017 4762 scope.go:117] "RemoveContainer" containerID="0f1d913f26c675392b272c522cd3e99bc41e7fd5f326449d03c114311f14a475" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.273505 4762 scope.go:117] "RemoveContainer" containerID="f1976e605dbc7317ec94274f6e4489f4035f976bbf3f23c0440163e028025360" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.286730 4762 scope.go:117] "RemoveContainer" containerID="1ba4a2533de809584140b4be228d0d78eb54d55a4fc17c8edc50140429063279" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.301640 4762 scope.go:117] "RemoveContainer" containerID="0f1d913f26c675392b272c522cd3e99bc41e7fd5f326449d03c114311f14a475" Mar 08 00:28:48 crc kubenswrapper[4762]: E0308 00:28:48.302277 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f1d913f26c675392b272c522cd3e99bc41e7fd5f326449d03c114311f14a475\": container with ID starting with 0f1d913f26c675392b272c522cd3e99bc41e7fd5f326449d03c114311f14a475 not found: ID does not exist" containerID="0f1d913f26c675392b272c522cd3e99bc41e7fd5f326449d03c114311f14a475" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.302326 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f1d913f26c675392b272c522cd3e99bc41e7fd5f326449d03c114311f14a475"} err="failed to get container status \"0f1d913f26c675392b272c522cd3e99bc41e7fd5f326449d03c114311f14a475\": rpc error: code = NotFound desc = could not find container \"0f1d913f26c675392b272c522cd3e99bc41e7fd5f326449d03c114311f14a475\": container with ID starting with 0f1d913f26c675392b272c522cd3e99bc41e7fd5f326449d03c114311f14a475 not found: ID does not exist" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.302366 4762 scope.go:117] "RemoveContainer" containerID="f1976e605dbc7317ec94274f6e4489f4035f976bbf3f23c0440163e028025360" Mar 08 00:28:48 crc 
kubenswrapper[4762]: E0308 00:28:48.302930 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1976e605dbc7317ec94274f6e4489f4035f976bbf3f23c0440163e028025360\": container with ID starting with f1976e605dbc7317ec94274f6e4489f4035f976bbf3f23c0440163e028025360 not found: ID does not exist" containerID="f1976e605dbc7317ec94274f6e4489f4035f976bbf3f23c0440163e028025360" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.302991 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1976e605dbc7317ec94274f6e4489f4035f976bbf3f23c0440163e028025360"} err="failed to get container status \"f1976e605dbc7317ec94274f6e4489f4035f976bbf3f23c0440163e028025360\": rpc error: code = NotFound desc = could not find container \"f1976e605dbc7317ec94274f6e4489f4035f976bbf3f23c0440163e028025360\": container with ID starting with f1976e605dbc7317ec94274f6e4489f4035f976bbf3f23c0440163e028025360 not found: ID does not exist" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.303039 4762 scope.go:117] "RemoveContainer" containerID="1ba4a2533de809584140b4be228d0d78eb54d55a4fc17c8edc50140429063279" Mar 08 00:28:48 crc kubenswrapper[4762]: E0308 00:28:48.303402 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ba4a2533de809584140b4be228d0d78eb54d55a4fc17c8edc50140429063279\": container with ID starting with 1ba4a2533de809584140b4be228d0d78eb54d55a4fc17c8edc50140429063279 not found: ID does not exist" containerID="1ba4a2533de809584140b4be228d0d78eb54d55a4fc17c8edc50140429063279" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.303432 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ba4a2533de809584140b4be228d0d78eb54d55a4fc17c8edc50140429063279"} err="failed to get container status 
\"1ba4a2533de809584140b4be228d0d78eb54d55a4fc17c8edc50140429063279\": rpc error: code = NotFound desc = could not find container \"1ba4a2533de809584140b4be228d0d78eb54d55a4fc17c8edc50140429063279\": container with ID starting with 1ba4a2533de809584140b4be228d0d78eb54d55a4fc17c8edc50140429063279 not found: ID does not exist" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.303450 4762 scope.go:117] "RemoveContainer" containerID="6f9cb375e6c0f683d68485ca6ed1065f77f5d1c26c601ad1830e824940f8eb5d" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.313954 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/210aa3ef-23bb-4e7b-9ff5-39cec85310ba-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.313988 4762 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/62e4d886-779c-4931-87f7-370090b02132-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.313999 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b1f4525-a957-4708-b166-0b16f67cb20a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.314014 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htklq\" (UniqueName: \"kubernetes.io/projected/62e4d886-779c-4931-87f7-370090b02132-kube-api-access-htklq\") on node \"crc\" DevicePath \"\"" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.314025 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30000013-c882-4eaa-a7f0-fc380ef4f09c-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.314037 4762 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pn5lb\" (UniqueName: \"kubernetes.io/projected/30000013-c882-4eaa-a7f0-fc380ef4f09c-kube-api-access-pn5lb\") on node \"crc\" DevicePath \"\"" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.314047 4762 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/62e4d886-779c-4931-87f7-370090b02132-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.317256 4762 scope.go:117] "RemoveContainer" containerID="b7fb236e6c44d73ccb8718946e751a7f3d78cadfab89932ae5f93ad61ed6f8a4" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.347858 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30000013-c882-4eaa-a7f0-fc380ef4f09c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30000013-c882-4eaa-a7f0-fc380ef4f09c" (UID: "30000013-c882-4eaa-a7f0-fc380ef4f09c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.371632 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qv7hs"] Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.376173 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qv7hs"] Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.412078 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dcsdg"] Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.415150 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30000013-c882-4eaa-a7f0-fc380ef4f09c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.415150 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dcsdg"] Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.422675 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc85x"] Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.434885 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wc85x"] Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.449915 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mg6jl"] Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.454146 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mg6jl"] Mar 08 00:28:48 crc kubenswrapper[4762]: I0308 00:28:48.457700 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sxtbp"] Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.106207 4762 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l4c9k" event={"ID":"30000013-c882-4eaa-a7f0-fc380ef4f09c","Type":"ContainerDied","Data":"fc2f74775b5368a3707dd41f9fae880938446105bb15b0a682dfa6f19f78f319"} Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.106261 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l4c9k" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.106273 4762 scope.go:117] "RemoveContainer" containerID="f5a6dacb6984b894bc6c80aab5ad9ea6a896efbadda90eb28afac6d737b48a9b" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.108476 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" event={"ID":"45e73cf0-17af-446f-8a92-5c45dee4ee00","Type":"ContainerStarted","Data":"128d74312fffa3c8fa2cc0415d3b38e52894c4ab314791a49d7c0852977f7dff"} Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.108514 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" event={"ID":"45e73cf0-17af-446f-8a92-5c45dee4ee00","Type":"ContainerStarted","Data":"a7e113079eb64a8ee85efcdf4a6e5cf7ad0801df6964e98cea55871aeb9c7c5c"} Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.109041 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.114174 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.128217 4762 scope.go:117] "RemoveContainer" containerID="38a1afdae8e8c7e0c4d72770d10fb16878506bb51fa550f35139f8f08ebf3670" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.157862 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" podStartSLOduration=2.157838648 podStartE2EDuration="2.157838648s" podCreationTimestamp="2026-03-08 00:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:28:49.128845497 +0000 UTC m=+350.602989851" watchObservedRunningTime="2026-03-08 00:28:49.157838648 +0000 UTC m=+350.631982992" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.164973 4762 scope.go:117] "RemoveContainer" containerID="cc98301ee742ffeee9934eab6b331bfca942453110de48d5f75cfb367aaab0b3" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.170792 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-l4c9k"] Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.193465 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-l4c9k"] Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.281922 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b1f4525-a957-4708-b166-0b16f67cb20a" path="/var/lib/kubelet/pods/1b1f4525-a957-4708-b166-0b16f67cb20a/volumes" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.282922 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210aa3ef-23bb-4e7b-9ff5-39cec85310ba" path="/var/lib/kubelet/pods/210aa3ef-23bb-4e7b-9ff5-39cec85310ba/volumes" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.284073 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30000013-c882-4eaa-a7f0-fc380ef4f09c" path="/var/lib/kubelet/pods/30000013-c882-4eaa-a7f0-fc380ef4f09c/volumes" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.285569 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62e4d886-779c-4931-87f7-370090b02132" path="/var/lib/kubelet/pods/62e4d886-779c-4931-87f7-370090b02132/volumes" Mar 08 00:28:49 crc 
kubenswrapper[4762]: I0308 00:28:49.286528 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63ac2172-da6d-436b-8cde-593837d65920" path="/var/lib/kubelet/pods/63ac2172-da6d-436b-8cde-593837d65920/volumes" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.762266 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7xslc"] Mar 08 00:28:49 crc kubenswrapper[4762]: E0308 00:28:49.762620 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30000013-c882-4eaa-a7f0-fc380ef4f09c" containerName="extract-utilities" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.762642 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="30000013-c882-4eaa-a7f0-fc380ef4f09c" containerName="extract-utilities" Mar 08 00:28:49 crc kubenswrapper[4762]: E0308 00:28:49.762660 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30000013-c882-4eaa-a7f0-fc380ef4f09c" containerName="extract-content" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.762672 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="30000013-c882-4eaa-a7f0-fc380ef4f09c" containerName="extract-content" Mar 08 00:28:49 crc kubenswrapper[4762]: E0308 00:28:49.762689 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ac2172-da6d-436b-8cde-593837d65920" containerName="registry-server" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.762700 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ac2172-da6d-436b-8cde-593837d65920" containerName="registry-server" Mar 08 00:28:49 crc kubenswrapper[4762]: E0308 00:28:49.762716 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62e4d886-779c-4931-87f7-370090b02132" containerName="marketplace-operator" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.762725 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="62e4d886-779c-4931-87f7-370090b02132" 
containerName="marketplace-operator" Mar 08 00:28:49 crc kubenswrapper[4762]: E0308 00:28:49.762739 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ac2172-da6d-436b-8cde-593837d65920" containerName="extract-utilities" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.762748 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ac2172-da6d-436b-8cde-593837d65920" containerName="extract-utilities" Mar 08 00:28:49 crc kubenswrapper[4762]: E0308 00:28:49.762779 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210aa3ef-23bb-4e7b-9ff5-39cec85310ba" containerName="extract-content" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.762787 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="210aa3ef-23bb-4e7b-9ff5-39cec85310ba" containerName="extract-content" Mar 08 00:28:49 crc kubenswrapper[4762]: E0308 00:28:49.762803 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210aa3ef-23bb-4e7b-9ff5-39cec85310ba" containerName="extract-utilities" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.762816 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="210aa3ef-23bb-4e7b-9ff5-39cec85310ba" containerName="extract-utilities" Mar 08 00:28:49 crc kubenswrapper[4762]: E0308 00:28:49.762829 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b1f4525-a957-4708-b166-0b16f67cb20a" containerName="extract-utilities" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.762840 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b1f4525-a957-4708-b166-0b16f67cb20a" containerName="extract-utilities" Mar 08 00:28:49 crc kubenswrapper[4762]: E0308 00:28:49.762854 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210aa3ef-23bb-4e7b-9ff5-39cec85310ba" containerName="registry-server" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.762863 4762 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="210aa3ef-23bb-4e7b-9ff5-39cec85310ba" containerName="registry-server" Mar 08 00:28:49 crc kubenswrapper[4762]: E0308 00:28:49.762880 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b1f4525-a957-4708-b166-0b16f67cb20a" containerName="extract-content" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.762890 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b1f4525-a957-4708-b166-0b16f67cb20a" containerName="extract-content" Mar 08 00:28:49 crc kubenswrapper[4762]: E0308 00:28:49.762910 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ac2172-da6d-436b-8cde-593837d65920" containerName="extract-content" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.762923 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ac2172-da6d-436b-8cde-593837d65920" containerName="extract-content" Mar 08 00:28:49 crc kubenswrapper[4762]: E0308 00:28:49.762936 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30000013-c882-4eaa-a7f0-fc380ef4f09c" containerName="registry-server" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.762947 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="30000013-c882-4eaa-a7f0-fc380ef4f09c" containerName="registry-server" Mar 08 00:28:49 crc kubenswrapper[4762]: E0308 00:28:49.762960 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b1f4525-a957-4708-b166-0b16f67cb20a" containerName="registry-server" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.762968 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b1f4525-a957-4708-b166-0b16f67cb20a" containerName="registry-server" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.763117 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="30000013-c882-4eaa-a7f0-fc380ef4f09c" containerName="registry-server" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.763152 4762 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="62e4d886-779c-4931-87f7-370090b02132" containerName="marketplace-operator" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.763169 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="210aa3ef-23bb-4e7b-9ff5-39cec85310ba" containerName="registry-server" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.763178 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b1f4525-a957-4708-b166-0b16f67cb20a" containerName="registry-server" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.763191 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="63ac2172-da6d-436b-8cde-593837d65920" containerName="registry-server" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.763205 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="62e4d886-779c-4931-87f7-370090b02132" containerName="marketplace-operator" Mar 08 00:28:49 crc kubenswrapper[4762]: E0308 00:28:49.763367 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62e4d886-779c-4931-87f7-370090b02132" containerName="marketplace-operator" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.763381 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="62e4d886-779c-4931-87f7-370090b02132" containerName="marketplace-operator" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.764413 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xslc" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.767340 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.775696 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xslc"] Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.933920 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83aab9a-f794-43d3-af07-0a00dac138da-catalog-content\") pod \"redhat-marketplace-7xslc\" (UID: \"b83aab9a-f794-43d3-af07-0a00dac138da\") " pod="openshift-marketplace/redhat-marketplace-7xslc" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.934047 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83aab9a-f794-43d3-af07-0a00dac138da-utilities\") pod \"redhat-marketplace-7xslc\" (UID: \"b83aab9a-f794-43d3-af07-0a00dac138da\") " pod="openshift-marketplace/redhat-marketplace-7xslc" Mar 08 00:28:49 crc kubenswrapper[4762]: I0308 00:28:49.934183 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgg9h\" (UniqueName: \"kubernetes.io/projected/b83aab9a-f794-43d3-af07-0a00dac138da-kube-api-access-hgg9h\") pod \"redhat-marketplace-7xslc\" (UID: \"b83aab9a-f794-43d3-af07-0a00dac138da\") " pod="openshift-marketplace/redhat-marketplace-7xslc" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.035537 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgg9h\" (UniqueName: \"kubernetes.io/projected/b83aab9a-f794-43d3-af07-0a00dac138da-kube-api-access-hgg9h\") pod \"redhat-marketplace-7xslc\" (UID: 
\"b83aab9a-f794-43d3-af07-0a00dac138da\") " pod="openshift-marketplace/redhat-marketplace-7xslc" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.035601 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83aab9a-f794-43d3-af07-0a00dac138da-catalog-content\") pod \"redhat-marketplace-7xslc\" (UID: \"b83aab9a-f794-43d3-af07-0a00dac138da\") " pod="openshift-marketplace/redhat-marketplace-7xslc" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.035633 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83aab9a-f794-43d3-af07-0a00dac138da-utilities\") pod \"redhat-marketplace-7xslc\" (UID: \"b83aab9a-f794-43d3-af07-0a00dac138da\") " pod="openshift-marketplace/redhat-marketplace-7xslc" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.036261 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b83aab9a-f794-43d3-af07-0a00dac138da-catalog-content\") pod \"redhat-marketplace-7xslc\" (UID: \"b83aab9a-f794-43d3-af07-0a00dac138da\") " pod="openshift-marketplace/redhat-marketplace-7xslc" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.036274 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b83aab9a-f794-43d3-af07-0a00dac138da-utilities\") pod \"redhat-marketplace-7xslc\" (UID: \"b83aab9a-f794-43d3-af07-0a00dac138da\") " pod="openshift-marketplace/redhat-marketplace-7xslc" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.072784 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgg9h\" (UniqueName: \"kubernetes.io/projected/b83aab9a-f794-43d3-af07-0a00dac138da-kube-api-access-hgg9h\") pod \"redhat-marketplace-7xslc\" (UID: \"b83aab9a-f794-43d3-af07-0a00dac138da\") " 
pod="openshift-marketplace/redhat-marketplace-7xslc" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.096062 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7xslc" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.364848 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8szql"] Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.370210 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8szql" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.371751 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7xslc"] Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.374394 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 08 00:28:50 crc kubenswrapper[4762]: W0308 00:28:50.377974 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb83aab9a_f794_43d3_af07_0a00dac138da.slice/crio-4d7f2f88dd6468cdce72c557e0aa40e4d50b05d195868ae9b2ae1112bad90fd1 WatchSource:0}: Error finding container 4d7f2f88dd6468cdce72c557e0aa40e4d50b05d195868ae9b2ae1112bad90fd1: Status 404 returned error can't find the container with id 4d7f2f88dd6468cdce72c557e0aa40e4d50b05d195868ae9b2ae1112bad90fd1 Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.403160 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8szql"] Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.461053 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c578d6b5-daa2-4fd3-88ee-29ab82caaa5a-catalog-content\") pod \"redhat-operators-8szql\" (UID: 
\"c578d6b5-daa2-4fd3-88ee-29ab82caaa5a\") " pod="openshift-marketplace/redhat-operators-8szql" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.461296 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c578d6b5-daa2-4fd3-88ee-29ab82caaa5a-utilities\") pod \"redhat-operators-8szql\" (UID: \"c578d6b5-daa2-4fd3-88ee-29ab82caaa5a\") " pod="openshift-marketplace/redhat-operators-8szql" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.461394 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqhwg\" (UniqueName: \"kubernetes.io/projected/c578d6b5-daa2-4fd3-88ee-29ab82caaa5a-kube-api-access-sqhwg\") pod \"redhat-operators-8szql\" (UID: \"c578d6b5-daa2-4fd3-88ee-29ab82caaa5a\") " pod="openshift-marketplace/redhat-operators-8szql" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.562253 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c578d6b5-daa2-4fd3-88ee-29ab82caaa5a-catalog-content\") pod \"redhat-operators-8szql\" (UID: \"c578d6b5-daa2-4fd3-88ee-29ab82caaa5a\") " pod="openshift-marketplace/redhat-operators-8szql" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.562822 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c578d6b5-daa2-4fd3-88ee-29ab82caaa5a-utilities\") pod \"redhat-operators-8szql\" (UID: \"c578d6b5-daa2-4fd3-88ee-29ab82caaa5a\") " pod="openshift-marketplace/redhat-operators-8szql" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.562904 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqhwg\" (UniqueName: \"kubernetes.io/projected/c578d6b5-daa2-4fd3-88ee-29ab82caaa5a-kube-api-access-sqhwg\") pod \"redhat-operators-8szql\" (UID: 
\"c578d6b5-daa2-4fd3-88ee-29ab82caaa5a\") " pod="openshift-marketplace/redhat-operators-8szql" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.563102 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c578d6b5-daa2-4fd3-88ee-29ab82caaa5a-catalog-content\") pod \"redhat-operators-8szql\" (UID: \"c578d6b5-daa2-4fd3-88ee-29ab82caaa5a\") " pod="openshift-marketplace/redhat-operators-8szql" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.563355 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c578d6b5-daa2-4fd3-88ee-29ab82caaa5a-utilities\") pod \"redhat-operators-8szql\" (UID: \"c578d6b5-daa2-4fd3-88ee-29ab82caaa5a\") " pod="openshift-marketplace/redhat-operators-8szql" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.587972 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqhwg\" (UniqueName: \"kubernetes.io/projected/c578d6b5-daa2-4fd3-88ee-29ab82caaa5a-kube-api-access-sqhwg\") pod \"redhat-operators-8szql\" (UID: \"c578d6b5-daa2-4fd3-88ee-29ab82caaa5a\") " pod="openshift-marketplace/redhat-operators-8szql" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.728068 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8szql" Mar 08 00:28:50 crc kubenswrapper[4762]: I0308 00:28:50.983418 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8szql"] Mar 08 00:28:50 crc kubenswrapper[4762]: W0308 00:28:50.993788 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc578d6b5_daa2_4fd3_88ee_29ab82caaa5a.slice/crio-3b0e34adb0ff9b8bf950952f4fc885a568db31a5fc98d09be2fe290330d63676 WatchSource:0}: Error finding container 3b0e34adb0ff9b8bf950952f4fc885a568db31a5fc98d09be2fe290330d63676: Status 404 returned error can't find the container with id 3b0e34adb0ff9b8bf950952f4fc885a568db31a5fc98d09be2fe290330d63676 Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.139743 4762 generic.go:334] "Generic (PLEG): container finished" podID="b83aab9a-f794-43d3-af07-0a00dac138da" containerID="5435cdfdb754cde09565d5eee634952330fee87f847624bce840acf847748c59" exitCode=0 Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.139869 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xslc" event={"ID":"b83aab9a-f794-43d3-af07-0a00dac138da","Type":"ContainerDied","Data":"5435cdfdb754cde09565d5eee634952330fee87f847624bce840acf847748c59"} Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.140398 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xslc" event={"ID":"b83aab9a-f794-43d3-af07-0a00dac138da","Type":"ContainerStarted","Data":"4d7f2f88dd6468cdce72c557e0aa40e4d50b05d195868ae9b2ae1112bad90fd1"} Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.144052 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8szql" 
event={"ID":"c578d6b5-daa2-4fd3-88ee-29ab82caaa5a","Type":"ContainerStarted","Data":"3b0e34adb0ff9b8bf950952f4fc885a568db31a5fc98d09be2fe290330d63676"} Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.578209 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xpcwq"] Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.579338 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.594644 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xpcwq"] Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.679495 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ba7245b9-c69a-44e9-bbff-61213cb5a743-registry-certificates\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.679553 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ba7245b9-c69a-44e9-bbff-61213cb5a743-bound-sa-token\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.679582 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ba7245b9-c69a-44e9-bbff-61213cb5a743-registry-tls\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.679599 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ba7245b9-c69a-44e9-bbff-61213cb5a743-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.679630 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ba7245b9-c69a-44e9-bbff-61213cb5a743-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.679652 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vc77\" (UniqueName: \"kubernetes.io/projected/ba7245b9-c69a-44e9-bbff-61213cb5a743-kube-api-access-5vc77\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.679845 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba7245b9-c69a-44e9-bbff-61213cb5a743-trusted-ca\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.679949 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.707130 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.781951 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ba7245b9-c69a-44e9-bbff-61213cb5a743-registry-tls\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.782017 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ba7245b9-c69a-44e9-bbff-61213cb5a743-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.782054 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ba7245b9-c69a-44e9-bbff-61213cb5a743-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.782080 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vc77\" (UniqueName: \"kubernetes.io/projected/ba7245b9-c69a-44e9-bbff-61213cb5a743-kube-api-access-5vc77\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.782102 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba7245b9-c69a-44e9-bbff-61213cb5a743-trusted-ca\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.782147 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ba7245b9-c69a-44e9-bbff-61213cb5a743-registry-certificates\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.782166 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ba7245b9-c69a-44e9-bbff-61213cb5a743-bound-sa-token\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.783678 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ba7245b9-c69a-44e9-bbff-61213cb5a743-ca-trust-extracted\") pod 
\"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.784980 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba7245b9-c69a-44e9-bbff-61213cb5a743-trusted-ca\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.786181 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ba7245b9-c69a-44e9-bbff-61213cb5a743-registry-certificates\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.790486 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ba7245b9-c69a-44e9-bbff-61213cb5a743-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.790666 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ba7245b9-c69a-44e9-bbff-61213cb5a743-registry-tls\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.798174 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vc77\" (UniqueName: 
\"kubernetes.io/projected/ba7245b9-c69a-44e9-bbff-61213cb5a743-kube-api-access-5vc77\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.800588 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ba7245b9-c69a-44e9-bbff-61213cb5a743-bound-sa-token\") pod \"image-registry-66df7c8f76-xpcwq\" (UID: \"ba7245b9-c69a-44e9-bbff-61213cb5a743\") " pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:51 crc kubenswrapper[4762]: I0308 00:28:51.901035 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.104891 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xpcwq"] Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.151686 4762 generic.go:334] "Generic (PLEG): container finished" podID="c578d6b5-daa2-4fd3-88ee-29ab82caaa5a" containerID="d0076677a3073e920a7ceaa8eeacdd6a9d3001a239ba1f81798722dc259cf074" exitCode=0 Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.151770 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8szql" event={"ID":"c578d6b5-daa2-4fd3-88ee-29ab82caaa5a","Type":"ContainerDied","Data":"d0076677a3073e920a7ceaa8eeacdd6a9d3001a239ba1f81798722dc259cf074"} Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.152635 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" event={"ID":"ba7245b9-c69a-44e9-bbff-61213cb5a743","Type":"ContainerStarted","Data":"395b5781601a7c7396636e3467c0e1c2d3f1e7dcd6a80cff85deb11612974067"} Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.155056 
4762 generic.go:334] "Generic (PLEG): container finished" podID="b83aab9a-f794-43d3-af07-0a00dac138da" containerID="129919b8ef25a5ae394067802c44cf2ef398de43d61640ff837f73477d674e6b" exitCode=0 Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.155079 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xslc" event={"ID":"b83aab9a-f794-43d3-af07-0a00dac138da","Type":"ContainerDied","Data":"129919b8ef25a5ae394067802c44cf2ef398de43d61640ff837f73477d674e6b"} Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.156363 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-78hq2"] Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.158777 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-78hq2" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.162142 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-78hq2"] Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.171944 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.294819 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f3c2509-9848-4e76-96ae-8f815f66d6d7-catalog-content\") pod \"certified-operators-78hq2\" (UID: \"4f3c2509-9848-4e76-96ae-8f815f66d6d7\") " pod="openshift-marketplace/certified-operators-78hq2" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.295439 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f3c2509-9848-4e76-96ae-8f815f66d6d7-utilities\") pod \"certified-operators-78hq2\" (UID: 
\"4f3c2509-9848-4e76-96ae-8f815f66d6d7\") " pod="openshift-marketplace/certified-operators-78hq2" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.295524 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2htb\" (UniqueName: \"kubernetes.io/projected/4f3c2509-9848-4e76-96ae-8f815f66d6d7-kube-api-access-q2htb\") pod \"certified-operators-78hq2\" (UID: \"4f3c2509-9848-4e76-96ae-8f815f66d6d7\") " pod="openshift-marketplace/certified-operators-78hq2" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.398014 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f3c2509-9848-4e76-96ae-8f815f66d6d7-catalog-content\") pod \"certified-operators-78hq2\" (UID: \"4f3c2509-9848-4e76-96ae-8f815f66d6d7\") " pod="openshift-marketplace/certified-operators-78hq2" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.398125 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f3c2509-9848-4e76-96ae-8f815f66d6d7-utilities\") pod \"certified-operators-78hq2\" (UID: \"4f3c2509-9848-4e76-96ae-8f815f66d6d7\") " pod="openshift-marketplace/certified-operators-78hq2" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.398228 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2htb\" (UniqueName: \"kubernetes.io/projected/4f3c2509-9848-4e76-96ae-8f815f66d6d7-kube-api-access-q2htb\") pod \"certified-operators-78hq2\" (UID: \"4f3c2509-9848-4e76-96ae-8f815f66d6d7\") " pod="openshift-marketplace/certified-operators-78hq2" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.398610 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f3c2509-9848-4e76-96ae-8f815f66d6d7-catalog-content\") pod 
\"certified-operators-78hq2\" (UID: \"4f3c2509-9848-4e76-96ae-8f815f66d6d7\") " pod="openshift-marketplace/certified-operators-78hq2" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.398901 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f3c2509-9848-4e76-96ae-8f815f66d6d7-utilities\") pod \"certified-operators-78hq2\" (UID: \"4f3c2509-9848-4e76-96ae-8f815f66d6d7\") " pod="openshift-marketplace/certified-operators-78hq2" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.421833 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2htb\" (UniqueName: \"kubernetes.io/projected/4f3c2509-9848-4e76-96ae-8f815f66d6d7-kube-api-access-q2htb\") pod \"certified-operators-78hq2\" (UID: \"4f3c2509-9848-4e76-96ae-8f815f66d6d7\") " pod="openshift-marketplace/certified-operators-78hq2" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.609847 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-78hq2" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.764004 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rfbxq"] Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.766508 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rfbxq" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.768650 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.773628 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rfbxq"] Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.903003 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0870b34f-2648-451a-a34e-8555e4e4982a-catalog-content\") pod \"community-operators-rfbxq\" (UID: \"0870b34f-2648-451a-a34e-8555e4e4982a\") " pod="openshift-marketplace/community-operators-rfbxq" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.903052 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxpxz\" (UniqueName: \"kubernetes.io/projected/0870b34f-2648-451a-a34e-8555e4e4982a-kube-api-access-wxpxz\") pod \"community-operators-rfbxq\" (UID: \"0870b34f-2648-451a-a34e-8555e4e4982a\") " pod="openshift-marketplace/community-operators-rfbxq" Mar 08 00:28:52 crc kubenswrapper[4762]: I0308 00:28:52.903149 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0870b34f-2648-451a-a34e-8555e4e4982a-utilities\") pod \"community-operators-rfbxq\" (UID: \"0870b34f-2648-451a-a34e-8555e4e4982a\") " pod="openshift-marketplace/community-operators-rfbxq" Mar 08 00:28:53 crc kubenswrapper[4762]: I0308 00:28:53.004875 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0870b34f-2648-451a-a34e-8555e4e4982a-catalog-content\") pod \"community-operators-rfbxq\" (UID: 
\"0870b34f-2648-451a-a34e-8555e4e4982a\") " pod="openshift-marketplace/community-operators-rfbxq" Mar 08 00:28:53 crc kubenswrapper[4762]: I0308 00:28:53.004934 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxpxz\" (UniqueName: \"kubernetes.io/projected/0870b34f-2648-451a-a34e-8555e4e4982a-kube-api-access-wxpxz\") pod \"community-operators-rfbxq\" (UID: \"0870b34f-2648-451a-a34e-8555e4e4982a\") " pod="openshift-marketplace/community-operators-rfbxq" Mar 08 00:28:53 crc kubenswrapper[4762]: I0308 00:28:53.004993 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0870b34f-2648-451a-a34e-8555e4e4982a-utilities\") pod \"community-operators-rfbxq\" (UID: \"0870b34f-2648-451a-a34e-8555e4e4982a\") " pod="openshift-marketplace/community-operators-rfbxq" Mar 08 00:28:53 crc kubenswrapper[4762]: I0308 00:28:53.005701 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0870b34f-2648-451a-a34e-8555e4e4982a-catalog-content\") pod \"community-operators-rfbxq\" (UID: \"0870b34f-2648-451a-a34e-8555e4e4982a\") " pod="openshift-marketplace/community-operators-rfbxq" Mar 08 00:28:53 crc kubenswrapper[4762]: I0308 00:28:53.005802 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0870b34f-2648-451a-a34e-8555e4e4982a-utilities\") pod \"community-operators-rfbxq\" (UID: \"0870b34f-2648-451a-a34e-8555e4e4982a\") " pod="openshift-marketplace/community-operators-rfbxq" Mar 08 00:28:53 crc kubenswrapper[4762]: I0308 00:28:53.030833 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxpxz\" (UniqueName: \"kubernetes.io/projected/0870b34f-2648-451a-a34e-8555e4e4982a-kube-api-access-wxpxz\") pod \"community-operators-rfbxq\" (UID: 
\"0870b34f-2648-451a-a34e-8555e4e4982a\") " pod="openshift-marketplace/community-operators-rfbxq" Mar 08 00:28:53 crc kubenswrapper[4762]: I0308 00:28:53.044969 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-78hq2"] Mar 08 00:28:53 crc kubenswrapper[4762]: W0308 00:28:53.053981 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f3c2509_9848_4e76_96ae_8f815f66d6d7.slice/crio-9bdb6f51f500019c65db19021ff2d261c8898b2563e5d5837ec9b7d98df94af1 WatchSource:0}: Error finding container 9bdb6f51f500019c65db19021ff2d261c8898b2563e5d5837ec9b7d98df94af1: Status 404 returned error can't find the container with id 9bdb6f51f500019c65db19021ff2d261c8898b2563e5d5837ec9b7d98df94af1 Mar 08 00:28:53 crc kubenswrapper[4762]: I0308 00:28:53.103885 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rfbxq" Mar 08 00:28:53 crc kubenswrapper[4762]: I0308 00:28:53.170028 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78hq2" event={"ID":"4f3c2509-9848-4e76-96ae-8f815f66d6d7","Type":"ContainerStarted","Data":"9bdb6f51f500019c65db19021ff2d261c8898b2563e5d5837ec9b7d98df94af1"} Mar 08 00:28:53 crc kubenswrapper[4762]: I0308 00:28:53.172126 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8szql" event={"ID":"c578d6b5-daa2-4fd3-88ee-29ab82caaa5a","Type":"ContainerStarted","Data":"ca1493bd9cd7c0fe1978fe5c184571dcfe0ce89d6e6d8e9f679507f773b4ee25"} Mar 08 00:28:53 crc kubenswrapper[4762]: I0308 00:28:53.176274 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" event={"ID":"ba7245b9-c69a-44e9-bbff-61213cb5a743","Type":"ContainerStarted","Data":"9e7b2c4cd0acc7110a36e96d5236e98626b90a35a93e46f651dab4158325045f"} Mar 08 00:28:53 
crc kubenswrapper[4762]: I0308 00:28:53.176590 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:28:53 crc kubenswrapper[4762]: I0308 00:28:53.181862 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7xslc" event={"ID":"b83aab9a-f794-43d3-af07-0a00dac138da","Type":"ContainerStarted","Data":"087a46a5d76946542b0207e59369fb0ded97e44987c9790142fbdf8c14814617"} Mar 08 00:28:53 crc kubenswrapper[4762]: I0308 00:28:53.254909 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7xslc" podStartSLOduration=2.701328514 podStartE2EDuration="4.254867452s" podCreationTimestamp="2026-03-08 00:28:49 +0000 UTC" firstStartedPulling="2026-03-08 00:28:51.141873832 +0000 UTC m=+352.616018186" lastFinishedPulling="2026-03-08 00:28:52.69541277 +0000 UTC m=+354.169557124" observedRunningTime="2026-03-08 00:28:53.245866623 +0000 UTC m=+354.720010977" watchObservedRunningTime="2026-03-08 00:28:53.254867452 +0000 UTC m=+354.729011796" Mar 08 00:28:53 crc kubenswrapper[4762]: I0308 00:28:53.272443 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" podStartSLOduration=2.272426008 podStartE2EDuration="2.272426008s" podCreationTimestamp="2026-03-08 00:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:28:53.26896737 +0000 UTC m=+354.743111714" watchObservedRunningTime="2026-03-08 00:28:53.272426008 +0000 UTC m=+354.746570352" Mar 08 00:28:53 crc kubenswrapper[4762]: I0308 00:28:53.372177 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rfbxq"] Mar 08 00:28:53 crc kubenswrapper[4762]: W0308 00:28:53.378610 4762 manager.go:1169] Failed to process watch 
event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0870b34f_2648_451a_a34e_8555e4e4982a.slice/crio-5d1fe0322e246ca3c52307da10041100066ff837b19021d87654b01d844dff1a WatchSource:0}: Error finding container 5d1fe0322e246ca3c52307da10041100066ff837b19021d87654b01d844dff1a: Status 404 returned error can't find the container with id 5d1fe0322e246ca3c52307da10041100066ff837b19021d87654b01d844dff1a Mar 08 00:28:54 crc kubenswrapper[4762]: I0308 00:28:54.187820 4762 generic.go:334] "Generic (PLEG): container finished" podID="0870b34f-2648-451a-a34e-8555e4e4982a" containerID="05dc2fbcc2345f45f8a35bbb888919d7779a0edd46aaa16c593010315c4f9a06" exitCode=0 Mar 08 00:28:54 crc kubenswrapper[4762]: I0308 00:28:54.187887 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfbxq" event={"ID":"0870b34f-2648-451a-a34e-8555e4e4982a","Type":"ContainerDied","Data":"05dc2fbcc2345f45f8a35bbb888919d7779a0edd46aaa16c593010315c4f9a06"} Mar 08 00:28:54 crc kubenswrapper[4762]: I0308 00:28:54.187911 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfbxq" event={"ID":"0870b34f-2648-451a-a34e-8555e4e4982a","Type":"ContainerStarted","Data":"5d1fe0322e246ca3c52307da10041100066ff837b19021d87654b01d844dff1a"} Mar 08 00:28:54 crc kubenswrapper[4762]: I0308 00:28:54.190978 4762 generic.go:334] "Generic (PLEG): container finished" podID="c578d6b5-daa2-4fd3-88ee-29ab82caaa5a" containerID="ca1493bd9cd7c0fe1978fe5c184571dcfe0ce89d6e6d8e9f679507f773b4ee25" exitCode=0 Mar 08 00:28:54 crc kubenswrapper[4762]: I0308 00:28:54.191011 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8szql" event={"ID":"c578d6b5-daa2-4fd3-88ee-29ab82caaa5a","Type":"ContainerDied","Data":"ca1493bd9cd7c0fe1978fe5c184571dcfe0ce89d6e6d8e9f679507f773b4ee25"} Mar 08 00:28:54 crc kubenswrapper[4762]: I0308 00:28:54.193053 4762 generic.go:334] "Generic 
(PLEG): container finished" podID="4f3c2509-9848-4e76-96ae-8f815f66d6d7" containerID="d46256da9ab9f13494b4b594628e26d9a4681461054518ccd07f89d157672703" exitCode=0 Mar 08 00:28:54 crc kubenswrapper[4762]: I0308 00:28:54.193122 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78hq2" event={"ID":"4f3c2509-9848-4e76-96ae-8f815f66d6d7","Type":"ContainerDied","Data":"d46256da9ab9f13494b4b594628e26d9a4681461054518ccd07f89d157672703"} Mar 08 00:28:55 crc kubenswrapper[4762]: I0308 00:28:55.200748 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78hq2" event={"ID":"4f3c2509-9848-4e76-96ae-8f815f66d6d7","Type":"ContainerStarted","Data":"a1cc07df25fa7baf5b1836f9d6270a7244563eecb572c6726b25f91f05582a02"} Mar 08 00:28:55 crc kubenswrapper[4762]: I0308 00:28:55.203437 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfbxq" event={"ID":"0870b34f-2648-451a-a34e-8555e4e4982a","Type":"ContainerStarted","Data":"31e92e1c6198dd2f819c9aa4b96c17c4a29d4d913dd44c573d896f535e538985"} Mar 08 00:28:55 crc kubenswrapper[4762]: I0308 00:28:55.205889 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8szql" event={"ID":"c578d6b5-daa2-4fd3-88ee-29ab82caaa5a","Type":"ContainerStarted","Data":"9725416bafc6de855a0c0fb900fd1a6dce5dd3cdef849f328a43790798550a83"} Mar 08 00:28:55 crc kubenswrapper[4762]: I0308 00:28:55.278621 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8szql" podStartSLOduration=2.863590365 podStartE2EDuration="5.278600499s" podCreationTimestamp="2026-03-08 00:28:50 +0000 UTC" firstStartedPulling="2026-03-08 00:28:52.176028893 +0000 UTC m=+353.650173237" lastFinishedPulling="2026-03-08 00:28:54.591038987 +0000 UTC m=+356.065183371" observedRunningTime="2026-03-08 00:28:55.274397669 +0000 UTC m=+356.748542023" 
watchObservedRunningTime="2026-03-08 00:28:55.278600499 +0000 UTC m=+356.752744853" Mar 08 00:28:56 crc kubenswrapper[4762]: I0308 00:28:56.212356 4762 generic.go:334] "Generic (PLEG): container finished" podID="4f3c2509-9848-4e76-96ae-8f815f66d6d7" containerID="a1cc07df25fa7baf5b1836f9d6270a7244563eecb572c6726b25f91f05582a02" exitCode=0 Mar 08 00:28:56 crc kubenswrapper[4762]: I0308 00:28:56.212443 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78hq2" event={"ID":"4f3c2509-9848-4e76-96ae-8f815f66d6d7","Type":"ContainerDied","Data":"a1cc07df25fa7baf5b1836f9d6270a7244563eecb572c6726b25f91f05582a02"} Mar 08 00:28:56 crc kubenswrapper[4762]: I0308 00:28:56.214702 4762 generic.go:334] "Generic (PLEG): container finished" podID="0870b34f-2648-451a-a34e-8555e4e4982a" containerID="31e92e1c6198dd2f819c9aa4b96c17c4a29d4d913dd44c573d896f535e538985" exitCode=0 Mar 08 00:28:56 crc kubenswrapper[4762]: I0308 00:28:56.215135 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfbxq" event={"ID":"0870b34f-2648-451a-a34e-8555e4e4982a","Type":"ContainerDied","Data":"31e92e1c6198dd2f819c9aa4b96c17c4a29d4d913dd44c573d896f535e538985"} Mar 08 00:28:57 crc kubenswrapper[4762]: I0308 00:28:57.222669 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-78hq2" event={"ID":"4f3c2509-9848-4e76-96ae-8f815f66d6d7","Type":"ContainerStarted","Data":"64e51ef3f052e0ef30467e7b07c8e809b5bbf424c270ae342ee5a39b40190f90"} Mar 08 00:28:57 crc kubenswrapper[4762]: I0308 00:28:57.228123 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rfbxq" event={"ID":"0870b34f-2648-451a-a34e-8555e4e4982a","Type":"ContainerStarted","Data":"b197a9e272fb0de17ba97040399065769e1a250160cbbd2f128dd4eb8452f82c"} Mar 08 00:28:57 crc kubenswrapper[4762]: I0308 00:28:57.252812 4762 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/certified-operators-78hq2" podStartSLOduration=2.749366805 podStartE2EDuration="5.252793737s" podCreationTimestamp="2026-03-08 00:28:52 +0000 UTC" firstStartedPulling="2026-03-08 00:28:54.195186367 +0000 UTC m=+355.669330711" lastFinishedPulling="2026-03-08 00:28:56.698613289 +0000 UTC m=+358.172757643" observedRunningTime="2026-03-08 00:28:57.244250092 +0000 UTC m=+358.718394436" watchObservedRunningTime="2026-03-08 00:28:57.252793737 +0000 UTC m=+358.726938081" Mar 08 00:28:57 crc kubenswrapper[4762]: I0308 00:28:57.270455 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rfbxq" podStartSLOduration=2.855582366 podStartE2EDuration="5.270439026s" podCreationTimestamp="2026-03-08 00:28:52 +0000 UTC" firstStartedPulling="2026-03-08 00:28:54.189737738 +0000 UTC m=+355.663882132" lastFinishedPulling="2026-03-08 00:28:56.604594418 +0000 UTC m=+358.078738792" observedRunningTime="2026-03-08 00:28:57.268728553 +0000 UTC m=+358.742872897" watchObservedRunningTime="2026-03-08 00:28:57.270439026 +0000 UTC m=+358.744583370" Mar 08 00:29:00 crc kubenswrapper[4762]: I0308 00:29:00.096936 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7xslc" Mar 08 00:29:00 crc kubenswrapper[4762]: I0308 00:29:00.097524 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7xslc" Mar 08 00:29:00 crc kubenswrapper[4762]: I0308 00:29:00.163021 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7xslc" Mar 08 00:29:00 crc kubenswrapper[4762]: I0308 00:29:00.317020 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7xslc" Mar 08 00:29:00 crc kubenswrapper[4762]: I0308 00:29:00.729466 4762 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8szql" Mar 08 00:29:00 crc kubenswrapper[4762]: I0308 00:29:00.729568 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8szql" Mar 08 00:29:01 crc kubenswrapper[4762]: I0308 00:29:01.799730 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8szql" podUID="c578d6b5-daa2-4fd3-88ee-29ab82caaa5a" containerName="registry-server" probeResult="failure" output=< Mar 08 00:29:01 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 00:29:01 crc kubenswrapper[4762]: > Mar 08 00:29:02 crc kubenswrapper[4762]: I0308 00:29:02.610810 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-78hq2" Mar 08 00:29:02 crc kubenswrapper[4762]: I0308 00:29:02.610869 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-78hq2" Mar 08 00:29:02 crc kubenswrapper[4762]: I0308 00:29:02.662018 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-78hq2" Mar 08 00:29:03 crc kubenswrapper[4762]: I0308 00:29:03.105109 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rfbxq" Mar 08 00:29:03 crc kubenswrapper[4762]: I0308 00:29:03.105736 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rfbxq" Mar 08 00:29:03 crc kubenswrapper[4762]: I0308 00:29:03.177962 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rfbxq" Mar 08 00:29:03 crc kubenswrapper[4762]: I0308 00:29:03.335570 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-78hq2" Mar 08 00:29:03 crc kubenswrapper[4762]: I0308 00:29:03.340597 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rfbxq" Mar 08 00:29:10 crc kubenswrapper[4762]: I0308 00:29:10.800325 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8szql" Mar 08 00:29:10 crc kubenswrapper[4762]: I0308 00:29:10.878541 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8szql" Mar 08 00:29:11 crc kubenswrapper[4762]: I0308 00:29:11.909694 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" Mar 08 00:29:11 crc kubenswrapper[4762]: I0308 00:29:11.975058 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n786p"] Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.019609 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-n786p" podUID="97a0d9d8-5b1c-49fb-a54b-6940eada7b0f" containerName="registry" containerID="cri-o://8de926cfa25764ec959071aa5afedb7ca1dbb5590ad6977c004cc0517192e2ae" gracePeriod=30 Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.446635 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.515974 4762 generic.go:334] "Generic (PLEG): container finished" podID="97a0d9d8-5b1c-49fb-a54b-6940eada7b0f" containerID="8de926cfa25764ec959071aa5afedb7ca1dbb5590ad6977c004cc0517192e2ae" exitCode=0 Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.516028 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n786p" event={"ID":"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f","Type":"ContainerDied","Data":"8de926cfa25764ec959071aa5afedb7ca1dbb5590ad6977c004cc0517192e2ae"} Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.516065 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-n786p" event={"ID":"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f","Type":"ContainerDied","Data":"22780104e914ff35adf752aaca2d1d52e58a49805290ab77e68adb90b75baac0"} Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.516087 4762 scope.go:117] "RemoveContainer" containerID="8de926cfa25764ec959071aa5afedb7ca1dbb5590ad6977c004cc0517192e2ae" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.516115 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-n786p" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.545188 4762 scope.go:117] "RemoveContainer" containerID="8de926cfa25764ec959071aa5afedb7ca1dbb5590ad6977c004cc0517192e2ae" Mar 08 00:29:37 crc kubenswrapper[4762]: E0308 00:29:37.546599 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8de926cfa25764ec959071aa5afedb7ca1dbb5590ad6977c004cc0517192e2ae\": container with ID starting with 8de926cfa25764ec959071aa5afedb7ca1dbb5590ad6977c004cc0517192e2ae not found: ID does not exist" containerID="8de926cfa25764ec959071aa5afedb7ca1dbb5590ad6977c004cc0517192e2ae" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.546648 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8de926cfa25764ec959071aa5afedb7ca1dbb5590ad6977c004cc0517192e2ae"} err="failed to get container status \"8de926cfa25764ec959071aa5afedb7ca1dbb5590ad6977c004cc0517192e2ae\": rpc error: code = NotFound desc = could not find container \"8de926cfa25764ec959071aa5afedb7ca1dbb5590ad6977c004cc0517192e2ae\": container with ID starting with 8de926cfa25764ec959071aa5afedb7ca1dbb5590ad6977c004cc0517192e2ae not found: ID does not exist" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.634723 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-installation-pull-secrets\") pod \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.634832 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwzwz\" (UniqueName: \"kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-kube-api-access-fwzwz\") pod 
\"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.634960 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-registry-certificates\") pod \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.635005 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-registry-tls\") pod \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.635341 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.635411 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-bound-sa-token\") pod \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.635451 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-trusted-ca\") pod \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.635504 4762 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-ca-trust-extracted\") pod \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\" (UID: \"97a0d9d8-5b1c-49fb-a54b-6940eada7b0f\") " Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.636607 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.637023 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.644561 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.644661 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f"). 
InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.645688 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.652606 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.656098 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-kube-api-access-fwzwz" (OuterVolumeSpecName: "kube-api-access-fwzwz") pod "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f"). InnerVolumeSpecName "kube-api-access-fwzwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.658605 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f" (UID: "97a0d9d8-5b1c-49fb-a54b-6940eada7b0f"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.737822 4762 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.737872 4762 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.737891 4762 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.737911 4762 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.737935 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwzwz\" (UniqueName: \"kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-kube-api-access-fwzwz\") on node \"crc\" DevicePath \"\"" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.737953 4762 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.737970 4762 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 08 00:29:37 crc 
kubenswrapper[4762]: I0308 00:29:37.877330 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n786p"] Mar 08 00:29:37 crc kubenswrapper[4762]: I0308 00:29:37.889008 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-n786p"] Mar 08 00:29:39 crc kubenswrapper[4762]: I0308 00:29:39.276146 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97a0d9d8-5b1c-49fb-a54b-6940eada7b0f" path="/var/lib/kubelet/pods/97a0d9d8-5b1c-49fb-a54b-6940eada7b0f/volumes" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.146575 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548830-h9bv7"] Mar 08 00:30:00 crc kubenswrapper[4762]: E0308 00:30:00.147606 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a0d9d8-5b1c-49fb-a54b-6940eada7b0f" containerName="registry" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.147622 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a0d9d8-5b1c-49fb-a54b-6940eada7b0f" containerName="registry" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.147734 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a0d9d8-5b1c-49fb-a54b-6940eada7b0f" containerName="registry" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.148136 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548830-h9bv7" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.152176 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.152753 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.153081 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.158486 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx"] Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.162744 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.173396 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.173533 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.179406 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548830-h9bv7"] Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.184622 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx"] Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.265438 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/df7f7a0c-caca-4535-8a12-c6dbca79e550-secret-volume\") pod \"collect-profiles-29548830-8d7wx\" (UID: \"df7f7a0c-caca-4535-8a12-c6dbca79e550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.265543 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df7f7a0c-caca-4535-8a12-c6dbca79e550-config-volume\") pod \"collect-profiles-29548830-8d7wx\" (UID: \"df7f7a0c-caca-4535-8a12-c6dbca79e550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.265670 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn7zq\" (UniqueName: \"kubernetes.io/projected/df7f7a0c-caca-4535-8a12-c6dbca79e550-kube-api-access-vn7zq\") pod \"collect-profiles-29548830-8d7wx\" (UID: \"df7f7a0c-caca-4535-8a12-c6dbca79e550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.265708 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5vxd\" (UniqueName: \"kubernetes.io/projected/d1420bea-fcf8-4463-b9a7-95a518acbe56-kube-api-access-k5vxd\") pod \"auto-csr-approver-29548830-h9bv7\" (UID: \"d1420bea-fcf8-4463-b9a7-95a518acbe56\") " pod="openshift-infra/auto-csr-approver-29548830-h9bv7" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.367451 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df7f7a0c-caca-4535-8a12-c6dbca79e550-config-volume\") pod \"collect-profiles-29548830-8d7wx\" (UID: \"df7f7a0c-caca-4535-8a12-c6dbca79e550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" Mar 08 
00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.367540 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn7zq\" (UniqueName: \"kubernetes.io/projected/df7f7a0c-caca-4535-8a12-c6dbca79e550-kube-api-access-vn7zq\") pod \"collect-profiles-29548830-8d7wx\" (UID: \"df7f7a0c-caca-4535-8a12-c6dbca79e550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.367568 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5vxd\" (UniqueName: \"kubernetes.io/projected/d1420bea-fcf8-4463-b9a7-95a518acbe56-kube-api-access-k5vxd\") pod \"auto-csr-approver-29548830-h9bv7\" (UID: \"d1420bea-fcf8-4463-b9a7-95a518acbe56\") " pod="openshift-infra/auto-csr-approver-29548830-h9bv7" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.367645 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df7f7a0c-caca-4535-8a12-c6dbca79e550-secret-volume\") pod \"collect-profiles-29548830-8d7wx\" (UID: \"df7f7a0c-caca-4535-8a12-c6dbca79e550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.369567 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df7f7a0c-caca-4535-8a12-c6dbca79e550-config-volume\") pod \"collect-profiles-29548830-8d7wx\" (UID: \"df7f7a0c-caca-4535-8a12-c6dbca79e550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.376432 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df7f7a0c-caca-4535-8a12-c6dbca79e550-secret-volume\") pod \"collect-profiles-29548830-8d7wx\" (UID: 
\"df7f7a0c-caca-4535-8a12-c6dbca79e550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.394616 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn7zq\" (UniqueName: \"kubernetes.io/projected/df7f7a0c-caca-4535-8a12-c6dbca79e550-kube-api-access-vn7zq\") pod \"collect-profiles-29548830-8d7wx\" (UID: \"df7f7a0c-caca-4535-8a12-c6dbca79e550\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.397910 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5vxd\" (UniqueName: \"kubernetes.io/projected/d1420bea-fcf8-4463-b9a7-95a518acbe56-kube-api-access-k5vxd\") pod \"auto-csr-approver-29548830-h9bv7\" (UID: \"d1420bea-fcf8-4463-b9a7-95a518acbe56\") " pod="openshift-infra/auto-csr-approver-29548830-h9bv7" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.498141 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548830-h9bv7" Mar 08 00:30:00 crc kubenswrapper[4762]: I0308 00:30:00.507598 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" Mar 08 00:30:01 crc kubenswrapper[4762]: I0308 00:30:01.373662 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx"] Mar 08 00:30:01 crc kubenswrapper[4762]: I0308 00:30:01.376870 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548830-h9bv7"] Mar 08 00:30:01 crc kubenswrapper[4762]: W0308 00:30:01.388700 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1420bea_fcf8_4463_b9a7_95a518acbe56.slice/crio-849a866ea43379b87978b1772b7a6b56dcade59209cc0c754d2947fcc6d518e6 WatchSource:0}: Error finding container 849a866ea43379b87978b1772b7a6b56dcade59209cc0c754d2947fcc6d518e6: Status 404 returned error can't find the container with id 849a866ea43379b87978b1772b7a6b56dcade59209cc0c754d2947fcc6d518e6 Mar 08 00:30:01 crc kubenswrapper[4762]: I0308 00:30:01.691832 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" event={"ID":"df7f7a0c-caca-4535-8a12-c6dbca79e550","Type":"ContainerStarted","Data":"a01a097c4a18ad1ce113dfab0995ee03c19cef1f2ef01ee694984dab715e609b"} Mar 08 00:30:01 crc kubenswrapper[4762]: I0308 00:30:01.691927 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" event={"ID":"df7f7a0c-caca-4535-8a12-c6dbca79e550","Type":"ContainerStarted","Data":"9ba02c48f0045867d9efecc819d1202e2d75fd1c514b733cd5fe8f2e20391da1"} Mar 08 00:30:01 crc kubenswrapper[4762]: I0308 00:30:01.694903 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548830-h9bv7" 
event={"ID":"d1420bea-fcf8-4463-b9a7-95a518acbe56","Type":"ContainerStarted","Data":"849a866ea43379b87978b1772b7a6b56dcade59209cc0c754d2947fcc6d518e6"} Mar 08 00:30:01 crc kubenswrapper[4762]: I0308 00:30:01.737551 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" podStartSLOduration=1.737503293 podStartE2EDuration="1.737503293s" podCreationTimestamp="2026-03-08 00:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:30:01.713667524 +0000 UTC m=+423.187811898" watchObservedRunningTime="2026-03-08 00:30:01.737503293 +0000 UTC m=+423.211647707" Mar 08 00:30:02 crc kubenswrapper[4762]: I0308 00:30:02.712711 4762 generic.go:334] "Generic (PLEG): container finished" podID="df7f7a0c-caca-4535-8a12-c6dbca79e550" containerID="a01a097c4a18ad1ce113dfab0995ee03c19cef1f2ef01ee694984dab715e609b" exitCode=0 Mar 08 00:30:02 crc kubenswrapper[4762]: I0308 00:30:02.712965 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" event={"ID":"df7f7a0c-caca-4535-8a12-c6dbca79e550","Type":"ContainerDied","Data":"a01a097c4a18ad1ce113dfab0995ee03c19cef1f2ef01ee694984dab715e609b"} Mar 08 00:30:04 crc kubenswrapper[4762]: I0308 00:30:04.037594 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" Mar 08 00:30:04 crc kubenswrapper[4762]: I0308 00:30:04.159249 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn7zq\" (UniqueName: \"kubernetes.io/projected/df7f7a0c-caca-4535-8a12-c6dbca79e550-kube-api-access-vn7zq\") pod \"df7f7a0c-caca-4535-8a12-c6dbca79e550\" (UID: \"df7f7a0c-caca-4535-8a12-c6dbca79e550\") " Mar 08 00:30:04 crc kubenswrapper[4762]: I0308 00:30:04.159653 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df7f7a0c-caca-4535-8a12-c6dbca79e550-secret-volume\") pod \"df7f7a0c-caca-4535-8a12-c6dbca79e550\" (UID: \"df7f7a0c-caca-4535-8a12-c6dbca79e550\") " Mar 08 00:30:04 crc kubenswrapper[4762]: I0308 00:30:04.159701 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df7f7a0c-caca-4535-8a12-c6dbca79e550-config-volume\") pod \"df7f7a0c-caca-4535-8a12-c6dbca79e550\" (UID: \"df7f7a0c-caca-4535-8a12-c6dbca79e550\") " Mar 08 00:30:04 crc kubenswrapper[4762]: I0308 00:30:04.161342 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df7f7a0c-caca-4535-8a12-c6dbca79e550-config-volume" (OuterVolumeSpecName: "config-volume") pod "df7f7a0c-caca-4535-8a12-c6dbca79e550" (UID: "df7f7a0c-caca-4535-8a12-c6dbca79e550"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:30:04 crc kubenswrapper[4762]: I0308 00:30:04.169057 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df7f7a0c-caca-4535-8a12-c6dbca79e550-kube-api-access-vn7zq" (OuterVolumeSpecName: "kube-api-access-vn7zq") pod "df7f7a0c-caca-4535-8a12-c6dbca79e550" (UID: "df7f7a0c-caca-4535-8a12-c6dbca79e550"). 
InnerVolumeSpecName "kube-api-access-vn7zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:30:04 crc kubenswrapper[4762]: I0308 00:30:04.169538 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df7f7a0c-caca-4535-8a12-c6dbca79e550-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "df7f7a0c-caca-4535-8a12-c6dbca79e550" (UID: "df7f7a0c-caca-4535-8a12-c6dbca79e550"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:30:04 crc kubenswrapper[4762]: I0308 00:30:04.260619 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn7zq\" (UniqueName: \"kubernetes.io/projected/df7f7a0c-caca-4535-8a12-c6dbca79e550-kube-api-access-vn7zq\") on node \"crc\" DevicePath \"\"" Mar 08 00:30:04 crc kubenswrapper[4762]: I0308 00:30:04.260679 4762 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df7f7a0c-caca-4535-8a12-c6dbca79e550-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 08 00:30:04 crc kubenswrapper[4762]: I0308 00:30:04.260705 4762 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df7f7a0c-caca-4535-8a12-c6dbca79e550-config-volume\") on node \"crc\" DevicePath \"\"" Mar 08 00:30:04 crc kubenswrapper[4762]: I0308 00:30:04.726541 4762 generic.go:334] "Generic (PLEG): container finished" podID="d1420bea-fcf8-4463-b9a7-95a518acbe56" containerID="3b2632051e282cd1b3c7666af9869675a8e92a74f0d236b62407b82d66afbea4" exitCode=0 Mar 08 00:30:04 crc kubenswrapper[4762]: I0308 00:30:04.726615 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548830-h9bv7" event={"ID":"d1420bea-fcf8-4463-b9a7-95a518acbe56","Type":"ContainerDied","Data":"3b2632051e282cd1b3c7666af9869675a8e92a74f0d236b62407b82d66afbea4"} Mar 08 00:30:04 crc kubenswrapper[4762]: I0308 00:30:04.728137 4762 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" event={"ID":"df7f7a0c-caca-4535-8a12-c6dbca79e550","Type":"ContainerDied","Data":"9ba02c48f0045867d9efecc819d1202e2d75fd1c514b733cd5fe8f2e20391da1"} Mar 08 00:30:04 crc kubenswrapper[4762]: I0308 00:30:04.728159 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ba02c48f0045867d9efecc819d1202e2d75fd1c514b733cd5fe8f2e20391da1" Mar 08 00:30:04 crc kubenswrapper[4762]: I0308 00:30:04.728181 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx" Mar 08 00:30:06 crc kubenswrapper[4762]: I0308 00:30:06.037879 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548830-h9bv7" Mar 08 00:30:06 crc kubenswrapper[4762]: I0308 00:30:06.185259 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5vxd\" (UniqueName: \"kubernetes.io/projected/d1420bea-fcf8-4463-b9a7-95a518acbe56-kube-api-access-k5vxd\") pod \"d1420bea-fcf8-4463-b9a7-95a518acbe56\" (UID: \"d1420bea-fcf8-4463-b9a7-95a518acbe56\") " Mar 08 00:30:06 crc kubenswrapper[4762]: I0308 00:30:06.193276 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1420bea-fcf8-4463-b9a7-95a518acbe56-kube-api-access-k5vxd" (OuterVolumeSpecName: "kube-api-access-k5vxd") pod "d1420bea-fcf8-4463-b9a7-95a518acbe56" (UID: "d1420bea-fcf8-4463-b9a7-95a518acbe56"). InnerVolumeSpecName "kube-api-access-k5vxd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:30:06 crc kubenswrapper[4762]: I0308 00:30:06.287742 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5vxd\" (UniqueName: \"kubernetes.io/projected/d1420bea-fcf8-4463-b9a7-95a518acbe56-kube-api-access-k5vxd\") on node \"crc\" DevicePath \"\"" Mar 08 00:30:06 crc kubenswrapper[4762]: I0308 00:30:06.746649 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548830-h9bv7" event={"ID":"d1420bea-fcf8-4463-b9a7-95a518acbe56","Type":"ContainerDied","Data":"849a866ea43379b87978b1772b7a6b56dcade59209cc0c754d2947fcc6d518e6"} Mar 08 00:30:06 crc kubenswrapper[4762]: I0308 00:30:06.746698 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="849a866ea43379b87978b1772b7a6b56dcade59209cc0c754d2947fcc6d518e6" Mar 08 00:30:06 crc kubenswrapper[4762]: I0308 00:30:06.746749 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548830-h9bv7" Mar 08 00:30:42 crc kubenswrapper[4762]: I0308 00:30:42.852058 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 00:30:42 crc kubenswrapper[4762]: I0308 00:30:42.852970 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:30:59 crc kubenswrapper[4762]: I0308 00:30:59.843814 4762 scope.go:117] "RemoveContainer" 
containerID="b67bb9155fbb001682129da5a6f1ceed81a1b9830563a4766887c717d2c39532" Mar 08 00:31:12 crc kubenswrapper[4762]: I0308 00:31:12.851407 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 00:31:12 crc kubenswrapper[4762]: I0308 00:31:12.852046 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:31:42 crc kubenswrapper[4762]: I0308 00:31:42.851937 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 00:31:42 crc kubenswrapper[4762]: I0308 00:31:42.852700 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:31:42 crc kubenswrapper[4762]: I0308 00:31:42.852808 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:31:42 crc kubenswrapper[4762]: I0308 00:31:42.853747 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"76f60a95ba76104d27683e873ff20dc2cc911e060fffacaab8d5230c6f720521"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 00:31:42 crc kubenswrapper[4762]: I0308 00:31:42.853877 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://76f60a95ba76104d27683e873ff20dc2cc911e060fffacaab8d5230c6f720521" gracePeriod=600 Mar 08 00:31:43 crc kubenswrapper[4762]: I0308 00:31:43.973365 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="76f60a95ba76104d27683e873ff20dc2cc911e060fffacaab8d5230c6f720521" exitCode=0 Mar 08 00:31:43 crc kubenswrapper[4762]: I0308 00:31:43.973459 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"76f60a95ba76104d27683e873ff20dc2cc911e060fffacaab8d5230c6f720521"} Mar 08 00:31:43 crc kubenswrapper[4762]: I0308 00:31:43.974332 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"6e1d379555c081f977d5be76e9ba3af1b94dc051410584368d425f49016b85e4"} Mar 08 00:31:43 crc kubenswrapper[4762]: I0308 00:31:43.974370 4762 scope.go:117] "RemoveContainer" containerID="257945ccf73ed75a308d80dc75a5f11ebd89eba7e7970e38512c4bec2dcc8e73" Mar 08 00:32:00 crc kubenswrapper[4762]: I0308 00:32:00.146380 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548832-ptz47"] Mar 08 00:32:00 crc kubenswrapper[4762]: E0308 
00:32:00.147406 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1420bea-fcf8-4463-b9a7-95a518acbe56" containerName="oc" Mar 08 00:32:00 crc kubenswrapper[4762]: I0308 00:32:00.147428 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1420bea-fcf8-4463-b9a7-95a518acbe56" containerName="oc" Mar 08 00:32:00 crc kubenswrapper[4762]: E0308 00:32:00.147452 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7f7a0c-caca-4535-8a12-c6dbca79e550" containerName="collect-profiles" Mar 08 00:32:00 crc kubenswrapper[4762]: I0308 00:32:00.147465 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7f7a0c-caca-4535-8a12-c6dbca79e550" containerName="collect-profiles" Mar 08 00:32:00 crc kubenswrapper[4762]: I0308 00:32:00.147647 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7f7a0c-caca-4535-8a12-c6dbca79e550" containerName="collect-profiles" Mar 08 00:32:00 crc kubenswrapper[4762]: I0308 00:32:00.147679 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1420bea-fcf8-4463-b9a7-95a518acbe56" containerName="oc" Mar 08 00:32:00 crc kubenswrapper[4762]: I0308 00:32:00.148237 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548832-ptz47" Mar 08 00:32:00 crc kubenswrapper[4762]: I0308 00:32:00.153549 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 00:32:00 crc kubenswrapper[4762]: I0308 00:32:00.154225 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 00:32:00 crc kubenswrapper[4762]: I0308 00:32:00.154367 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 00:32:00 crc kubenswrapper[4762]: I0308 00:32:00.158331 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548832-ptz47"] Mar 08 00:32:00 crc kubenswrapper[4762]: I0308 00:32:00.228575 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrl8k\" (UniqueName: \"kubernetes.io/projected/91b5626c-25cc-4cfa-a896-4eb18325572f-kube-api-access-mrl8k\") pod \"auto-csr-approver-29548832-ptz47\" (UID: \"91b5626c-25cc-4cfa-a896-4eb18325572f\") " pod="openshift-infra/auto-csr-approver-29548832-ptz47" Mar 08 00:32:00 crc kubenswrapper[4762]: I0308 00:32:00.329878 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrl8k\" (UniqueName: \"kubernetes.io/projected/91b5626c-25cc-4cfa-a896-4eb18325572f-kube-api-access-mrl8k\") pod \"auto-csr-approver-29548832-ptz47\" (UID: \"91b5626c-25cc-4cfa-a896-4eb18325572f\") " pod="openshift-infra/auto-csr-approver-29548832-ptz47" Mar 08 00:32:00 crc kubenswrapper[4762]: I0308 00:32:00.366498 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrl8k\" (UniqueName: \"kubernetes.io/projected/91b5626c-25cc-4cfa-a896-4eb18325572f-kube-api-access-mrl8k\") pod \"auto-csr-approver-29548832-ptz47\" (UID: \"91b5626c-25cc-4cfa-a896-4eb18325572f\") " 
pod="openshift-infra/auto-csr-approver-29548832-ptz47" Mar 08 00:32:00 crc kubenswrapper[4762]: I0308 00:32:00.476853 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548832-ptz47" Mar 08 00:32:00 crc kubenswrapper[4762]: I0308 00:32:00.741805 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548832-ptz47"] Mar 08 00:32:00 crc kubenswrapper[4762]: I0308 00:32:00.753835 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 00:32:01 crc kubenswrapper[4762]: I0308 00:32:01.108376 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548832-ptz47" event={"ID":"91b5626c-25cc-4cfa-a896-4eb18325572f","Type":"ContainerStarted","Data":"3a0146aca3d65489a002f377ee1c5f90682b09ec057d2d9d6e3536d1fe98a4d8"} Mar 08 00:32:03 crc kubenswrapper[4762]: I0308 00:32:03.127738 4762 generic.go:334] "Generic (PLEG): container finished" podID="91b5626c-25cc-4cfa-a896-4eb18325572f" containerID="f20bb89b9022b2a026baa790ada1bfe94c58862630f89a75cc361452fc72a043" exitCode=0 Mar 08 00:32:03 crc kubenswrapper[4762]: I0308 00:32:03.128088 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548832-ptz47" event={"ID":"91b5626c-25cc-4cfa-a896-4eb18325572f","Type":"ContainerDied","Data":"f20bb89b9022b2a026baa790ada1bfe94c58862630f89a75cc361452fc72a043"} Mar 08 00:32:04 crc kubenswrapper[4762]: I0308 00:32:04.479040 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548832-ptz47" Mar 08 00:32:04 crc kubenswrapper[4762]: I0308 00:32:04.595191 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrl8k\" (UniqueName: \"kubernetes.io/projected/91b5626c-25cc-4cfa-a896-4eb18325572f-kube-api-access-mrl8k\") pod \"91b5626c-25cc-4cfa-a896-4eb18325572f\" (UID: \"91b5626c-25cc-4cfa-a896-4eb18325572f\") " Mar 08 00:32:04 crc kubenswrapper[4762]: I0308 00:32:04.606080 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b5626c-25cc-4cfa-a896-4eb18325572f-kube-api-access-mrl8k" (OuterVolumeSpecName: "kube-api-access-mrl8k") pod "91b5626c-25cc-4cfa-a896-4eb18325572f" (UID: "91b5626c-25cc-4cfa-a896-4eb18325572f"). InnerVolumeSpecName "kube-api-access-mrl8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:32:04 crc kubenswrapper[4762]: I0308 00:32:04.696864 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrl8k\" (UniqueName: \"kubernetes.io/projected/91b5626c-25cc-4cfa-a896-4eb18325572f-kube-api-access-mrl8k\") on node \"crc\" DevicePath \"\"" Mar 08 00:32:05 crc kubenswrapper[4762]: I0308 00:32:05.147303 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548832-ptz47" event={"ID":"91b5626c-25cc-4cfa-a896-4eb18325572f","Type":"ContainerDied","Data":"3a0146aca3d65489a002f377ee1c5f90682b09ec057d2d9d6e3536d1fe98a4d8"} Mar 08 00:32:05 crc kubenswrapper[4762]: I0308 00:32:05.147348 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a0146aca3d65489a002f377ee1c5f90682b09ec057d2d9d6e3536d1fe98a4d8" Mar 08 00:32:05 crc kubenswrapper[4762]: I0308 00:32:05.147405 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548832-ptz47" Mar 08 00:32:05 crc kubenswrapper[4762]: I0308 00:32:05.553393 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548826-5sjq2"] Mar 08 00:32:05 crc kubenswrapper[4762]: I0308 00:32:05.562091 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548826-5sjq2"] Mar 08 00:32:07 crc kubenswrapper[4762]: I0308 00:32:07.277692 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93accc2a-5975-4e5f-8927-264224130aca" path="/var/lib/kubelet/pods/93accc2a-5975-4e5f-8927-264224130aca/volumes" Mar 08 00:32:59 crc kubenswrapper[4762]: I0308 00:32:59.909857 4762 scope.go:117] "RemoveContainer" containerID="4e9d10f45a83942d86c72ff918c3b6b3a4773a81038463214e0bc0d25c26146e" Mar 08 00:34:00 crc kubenswrapper[4762]: I0308 00:34:00.166832 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548834-9mnrm"] Mar 08 00:34:00 crc kubenswrapper[4762]: E0308 00:34:00.168270 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91b5626c-25cc-4cfa-a896-4eb18325572f" containerName="oc" Mar 08 00:34:00 crc kubenswrapper[4762]: I0308 00:34:00.168296 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b5626c-25cc-4cfa-a896-4eb18325572f" containerName="oc" Mar 08 00:34:00 crc kubenswrapper[4762]: I0308 00:34:00.168529 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="91b5626c-25cc-4cfa-a896-4eb18325572f" containerName="oc" Mar 08 00:34:00 crc kubenswrapper[4762]: I0308 00:34:00.169326 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548834-9mnrm" Mar 08 00:34:00 crc kubenswrapper[4762]: I0308 00:34:00.177120 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 00:34:00 crc kubenswrapper[4762]: I0308 00:34:00.177252 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 00:34:00 crc kubenswrapper[4762]: I0308 00:34:00.177120 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548834-9mnrm"] Mar 08 00:34:00 crc kubenswrapper[4762]: I0308 00:34:00.182568 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 00:34:00 crc kubenswrapper[4762]: I0308 00:34:00.274318 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kddl4\" (UniqueName: \"kubernetes.io/projected/b177d8e3-9c43-497d-a486-e406692bd63f-kube-api-access-kddl4\") pod \"auto-csr-approver-29548834-9mnrm\" (UID: \"b177d8e3-9c43-497d-a486-e406692bd63f\") " pod="openshift-infra/auto-csr-approver-29548834-9mnrm" Mar 08 00:34:00 crc kubenswrapper[4762]: I0308 00:34:00.376984 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kddl4\" (UniqueName: \"kubernetes.io/projected/b177d8e3-9c43-497d-a486-e406692bd63f-kube-api-access-kddl4\") pod \"auto-csr-approver-29548834-9mnrm\" (UID: \"b177d8e3-9c43-497d-a486-e406692bd63f\") " pod="openshift-infra/auto-csr-approver-29548834-9mnrm" Mar 08 00:34:00 crc kubenswrapper[4762]: I0308 00:34:00.413694 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kddl4\" (UniqueName: \"kubernetes.io/projected/b177d8e3-9c43-497d-a486-e406692bd63f-kube-api-access-kddl4\") pod \"auto-csr-approver-29548834-9mnrm\" (UID: \"b177d8e3-9c43-497d-a486-e406692bd63f\") " 
pod="openshift-infra/auto-csr-approver-29548834-9mnrm" Mar 08 00:34:00 crc kubenswrapper[4762]: I0308 00:34:00.502032 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548834-9mnrm" Mar 08 00:34:00 crc kubenswrapper[4762]: I0308 00:34:00.802823 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548834-9mnrm"] Mar 08 00:34:00 crc kubenswrapper[4762]: W0308 00:34:00.814727 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb177d8e3_9c43_497d_a486_e406692bd63f.slice/crio-d3fde9b780102d474a75cefcb46f3fb929991d23fbbd386239865c25c1a5a791 WatchSource:0}: Error finding container d3fde9b780102d474a75cefcb46f3fb929991d23fbbd386239865c25c1a5a791: Status 404 returned error can't find the container with id d3fde9b780102d474a75cefcb46f3fb929991d23fbbd386239865c25c1a5a791 Mar 08 00:34:01 crc kubenswrapper[4762]: I0308 00:34:01.030920 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548834-9mnrm" event={"ID":"b177d8e3-9c43-497d-a486-e406692bd63f","Type":"ContainerStarted","Data":"d3fde9b780102d474a75cefcb46f3fb929991d23fbbd386239865c25c1a5a791"} Mar 08 00:34:02 crc kubenswrapper[4762]: I0308 00:34:02.040158 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548834-9mnrm" event={"ID":"b177d8e3-9c43-497d-a486-e406692bd63f","Type":"ContainerStarted","Data":"d3dd37e3275d998a5e68707a602842d29eebb5b577259e3d232f3dfc129f68b8"} Mar 08 00:34:02 crc kubenswrapper[4762]: I0308 00:34:02.069655 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548834-9mnrm" podStartSLOduration=1.211035789 podStartE2EDuration="2.069621794s" podCreationTimestamp="2026-03-08 00:34:00 +0000 UTC" firstStartedPulling="2026-03-08 00:34:00.818483049 +0000 UTC 
m=+662.292627413" lastFinishedPulling="2026-03-08 00:34:01.677069034 +0000 UTC m=+663.151213418" observedRunningTime="2026-03-08 00:34:02.05559108 +0000 UTC m=+663.529735454" watchObservedRunningTime="2026-03-08 00:34:02.069621794 +0000 UTC m=+663.543766178" Mar 08 00:34:03 crc kubenswrapper[4762]: I0308 00:34:03.050093 4762 generic.go:334] "Generic (PLEG): container finished" podID="b177d8e3-9c43-497d-a486-e406692bd63f" containerID="d3dd37e3275d998a5e68707a602842d29eebb5b577259e3d232f3dfc129f68b8" exitCode=0 Mar 08 00:34:03 crc kubenswrapper[4762]: I0308 00:34:03.050242 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548834-9mnrm" event={"ID":"b177d8e3-9c43-497d-a486-e406692bd63f","Type":"ContainerDied","Data":"d3dd37e3275d998a5e68707a602842d29eebb5b577259e3d232f3dfc129f68b8"} Mar 08 00:34:04 crc kubenswrapper[4762]: I0308 00:34:04.356816 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548834-9mnrm" Mar 08 00:34:04 crc kubenswrapper[4762]: I0308 00:34:04.437948 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kddl4\" (UniqueName: \"kubernetes.io/projected/b177d8e3-9c43-497d-a486-e406692bd63f-kube-api-access-kddl4\") pod \"b177d8e3-9c43-497d-a486-e406692bd63f\" (UID: \"b177d8e3-9c43-497d-a486-e406692bd63f\") " Mar 08 00:34:04 crc kubenswrapper[4762]: I0308 00:34:04.447992 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b177d8e3-9c43-497d-a486-e406692bd63f-kube-api-access-kddl4" (OuterVolumeSpecName: "kube-api-access-kddl4") pod "b177d8e3-9c43-497d-a486-e406692bd63f" (UID: "b177d8e3-9c43-497d-a486-e406692bd63f"). InnerVolumeSpecName "kube-api-access-kddl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:34:04 crc kubenswrapper[4762]: I0308 00:34:04.539411 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kddl4\" (UniqueName: \"kubernetes.io/projected/b177d8e3-9c43-497d-a486-e406692bd63f-kube-api-access-kddl4\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:05 crc kubenswrapper[4762]: I0308 00:34:05.066399 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548834-9mnrm" event={"ID":"b177d8e3-9c43-497d-a486-e406692bd63f","Type":"ContainerDied","Data":"d3fde9b780102d474a75cefcb46f3fb929991d23fbbd386239865c25c1a5a791"} Mar 08 00:34:05 crc kubenswrapper[4762]: I0308 00:34:05.066449 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3fde9b780102d474a75cefcb46f3fb929991d23fbbd386239865c25c1a5a791" Mar 08 00:34:05 crc kubenswrapper[4762]: I0308 00:34:05.066530 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548834-9mnrm" Mar 08 00:34:05 crc kubenswrapper[4762]: I0308 00:34:05.128157 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548828-5t2t6"] Mar 08 00:34:05 crc kubenswrapper[4762]: I0308 00:34:05.134824 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548828-5t2t6"] Mar 08 00:34:05 crc kubenswrapper[4762]: I0308 00:34:05.280340 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="638abfba-91f5-4c8d-819b-9940c1dddd1c" path="/var/lib/kubelet/pods/638abfba-91f5-4c8d-819b-9940c1dddd1c/volumes" Mar 08 00:34:12 crc kubenswrapper[4762]: I0308 00:34:12.852116 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 08 00:34:12 crc kubenswrapper[4762]: I0308 00:34:12.852490 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:34:14 crc kubenswrapper[4762]: I0308 00:34:14.096734 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt"] Mar 08 00:34:14 crc kubenswrapper[4762]: E0308 00:34:14.097068 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b177d8e3-9c43-497d-a486-e406692bd63f" containerName="oc" Mar 08 00:34:14 crc kubenswrapper[4762]: I0308 00:34:14.097090 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="b177d8e3-9c43-497d-a486-e406692bd63f" containerName="oc" Mar 08 00:34:14 crc kubenswrapper[4762]: I0308 00:34:14.097281 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="b177d8e3-9c43-497d-a486-e406692bd63f" containerName="oc" Mar 08 00:34:14 crc kubenswrapper[4762]: I0308 00:34:14.098619 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" Mar 08 00:34:14 crc kubenswrapper[4762]: I0308 00:34:14.101626 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 08 00:34:14 crc kubenswrapper[4762]: I0308 00:34:14.115638 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt"] Mar 08 00:34:14 crc kubenswrapper[4762]: I0308 00:34:14.274684 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt\" (UID: \"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" Mar 08 00:34:14 crc kubenswrapper[4762]: I0308 00:34:14.274798 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt\" (UID: \"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" Mar 08 00:34:14 crc kubenswrapper[4762]: I0308 00:34:14.275046 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gglb\" (UniqueName: \"kubernetes.io/projected/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-kube-api-access-2gglb\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt\" (UID: \"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" Mar 08 00:34:14 crc kubenswrapper[4762]: 
I0308 00:34:14.375988 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt\" (UID: \"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" Mar 08 00:34:14 crc kubenswrapper[4762]: I0308 00:34:14.376469 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gglb\" (UniqueName: \"kubernetes.io/projected/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-kube-api-access-2gglb\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt\" (UID: \"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" Mar 08 00:34:14 crc kubenswrapper[4762]: I0308 00:34:14.376528 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt\" (UID: \"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" Mar 08 00:34:14 crc kubenswrapper[4762]: I0308 00:34:14.376831 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt\" (UID: \"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" Mar 08 00:34:14 crc kubenswrapper[4762]: I0308 00:34:14.377228 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt\" (UID: \"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" Mar 08 00:34:14 crc kubenswrapper[4762]: I0308 00:34:14.405572 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gglb\" (UniqueName: \"kubernetes.io/projected/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-kube-api-access-2gglb\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt\" (UID: \"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" Mar 08 00:34:14 crc kubenswrapper[4762]: I0308 00:34:14.414053 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" Mar 08 00:34:14 crc kubenswrapper[4762]: I0308 00:34:14.861409 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt"] Mar 08 00:34:15 crc kubenswrapper[4762]: I0308 00:34:15.146958 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" event={"ID":"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95","Type":"ContainerStarted","Data":"329504f725a35827987b3cebaa7a2ab101019b2d2c7db2078e0e3ed66c86c9b0"} Mar 08 00:34:15 crc kubenswrapper[4762]: I0308 00:34:15.147017 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" event={"ID":"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95","Type":"ContainerStarted","Data":"ff71717b39dd2bd95a9735c276a3b00c3585b6e80fb31625728b275f5121f0ce"} Mar 08 00:34:16 crc kubenswrapper[4762]: I0308 00:34:16.157161 4762 
generic.go:334] "Generic (PLEG): container finished" podID="6cb1cb3e-e36d-4101-ad14-2f03a84bfe95" containerID="329504f725a35827987b3cebaa7a2ab101019b2d2c7db2078e0e3ed66c86c9b0" exitCode=0 Mar 08 00:34:16 crc kubenswrapper[4762]: I0308 00:34:16.157241 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" event={"ID":"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95","Type":"ContainerDied","Data":"329504f725a35827987b3cebaa7a2ab101019b2d2c7db2078e0e3ed66c86c9b0"} Mar 08 00:34:18 crc kubenswrapper[4762]: I0308 00:34:18.181271 4762 generic.go:334] "Generic (PLEG): container finished" podID="6cb1cb3e-e36d-4101-ad14-2f03a84bfe95" containerID="89d42280255473cdfc9730f78dcba8d38078a46ffbc1e94ef83e0dceac757897" exitCode=0 Mar 08 00:34:18 crc kubenswrapper[4762]: I0308 00:34:18.181411 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" event={"ID":"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95","Type":"ContainerDied","Data":"89d42280255473cdfc9730f78dcba8d38078a46ffbc1e94ef83e0dceac757897"} Mar 08 00:34:19 crc kubenswrapper[4762]: I0308 00:34:19.194427 4762 generic.go:334] "Generic (PLEG): container finished" podID="6cb1cb3e-e36d-4101-ad14-2f03a84bfe95" containerID="a240f843e11d5ecaadee549ea3ef889ec7a904d0c9804ecb84215ff6d933389e" exitCode=0 Mar 08 00:34:19 crc kubenswrapper[4762]: I0308 00:34:19.194622 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" event={"ID":"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95","Type":"ContainerDied","Data":"a240f843e11d5ecaadee549ea3ef889ec7a904d0c9804ecb84215ff6d933389e"} Mar 08 00:34:20 crc kubenswrapper[4762]: I0308 00:34:20.577877 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" Mar 08 00:34:20 crc kubenswrapper[4762]: I0308 00:34:20.771988 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gglb\" (UniqueName: \"kubernetes.io/projected/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-kube-api-access-2gglb\") pod \"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95\" (UID: \"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95\") " Mar 08 00:34:20 crc kubenswrapper[4762]: I0308 00:34:20.772076 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-util\") pod \"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95\" (UID: \"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95\") " Mar 08 00:34:20 crc kubenswrapper[4762]: I0308 00:34:20.772129 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-bundle\") pod \"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95\" (UID: \"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95\") " Mar 08 00:34:20 crc kubenswrapper[4762]: I0308 00:34:20.774934 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-bundle" (OuterVolumeSpecName: "bundle") pod "6cb1cb3e-e36d-4101-ad14-2f03a84bfe95" (UID: "6cb1cb3e-e36d-4101-ad14-2f03a84bfe95"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:34:20 crc kubenswrapper[4762]: I0308 00:34:20.781824 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-kube-api-access-2gglb" (OuterVolumeSpecName: "kube-api-access-2gglb") pod "6cb1cb3e-e36d-4101-ad14-2f03a84bfe95" (UID: "6cb1cb3e-e36d-4101-ad14-2f03a84bfe95"). InnerVolumeSpecName "kube-api-access-2gglb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:34:20 crc kubenswrapper[4762]: I0308 00:34:20.789020 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-util" (OuterVolumeSpecName: "util") pod "6cb1cb3e-e36d-4101-ad14-2f03a84bfe95" (UID: "6cb1cb3e-e36d-4101-ad14-2f03a84bfe95"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:34:20 crc kubenswrapper[4762]: I0308 00:34:20.873696 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gglb\" (UniqueName: \"kubernetes.io/projected/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-kube-api-access-2gglb\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:20 crc kubenswrapper[4762]: I0308 00:34:20.874122 4762 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-util\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:20 crc kubenswrapper[4762]: I0308 00:34:20.874170 4762 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6cb1cb3e-e36d-4101-ad14-2f03a84bfe95-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:21 crc kubenswrapper[4762]: I0308 00:34:21.217103 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" event={"ID":"6cb1cb3e-e36d-4101-ad14-2f03a84bfe95","Type":"ContainerDied","Data":"ff71717b39dd2bd95a9735c276a3b00c3585b6e80fb31625728b275f5121f0ce"} Mar 08 00:34:21 crc kubenswrapper[4762]: I0308 00:34:21.217183 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff71717b39dd2bd95a9735c276a3b00c3585b6e80fb31625728b275f5121f0ce" Mar 08 00:34:21 crc kubenswrapper[4762]: I0308 00:34:21.217243 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.190293 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hfbrb"] Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.191323 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="ovn-controller" containerID="cri-o://c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d" gracePeriod=30 Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.191531 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="sbdb" containerID="cri-o://50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d" gracePeriod=30 Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.191592 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="nbdb" containerID="cri-o://5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a" gracePeriod=30 Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.191665 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="ovn-acl-logging" containerID="cri-o://7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2" gracePeriod=30 Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.191652 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" 
containerName="kube-rbac-proxy-node" containerID="cri-o://49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84" gracePeriod=30 Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.191801 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="northd" containerID="cri-o://8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c" gracePeriod=30 Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.191850 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6" gracePeriod=30 Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.234901 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="ovnkube-controller" containerID="cri-o://776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2" gracePeriod=30 Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.478617 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hfbrb_8c6764d8-a35c-4d3f-8b38-1cec1782d9bf/ovn-acl-logging/0.log" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.479201 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hfbrb_8c6764d8-a35c-4d3f-8b38-1cec1782d9bf/ovn-controller/0.log" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.479675 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.553768 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q6mt4"] Mar 08 00:34:25 crc kubenswrapper[4762]: E0308 00:34:25.553974 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="ovn-acl-logging" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.553986 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="ovn-acl-logging" Mar 08 00:34:25 crc kubenswrapper[4762]: E0308 00:34:25.553995 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="kube-rbac-proxy-ovn-metrics" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554002 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="kube-rbac-proxy-ovn-metrics" Mar 08 00:34:25 crc kubenswrapper[4762]: E0308 00:34:25.554013 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cb1cb3e-e36d-4101-ad14-2f03a84bfe95" containerName="pull" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554019 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb1cb3e-e36d-4101-ad14-2f03a84bfe95" containerName="pull" Mar 08 00:34:25 crc kubenswrapper[4762]: E0308 00:34:25.554028 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="kube-rbac-proxy-node" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554033 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="kube-rbac-proxy-node" Mar 08 00:34:25 crc kubenswrapper[4762]: E0308 00:34:25.554043 4762 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="ovn-controller" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554051 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="ovn-controller" Mar 08 00:34:25 crc kubenswrapper[4762]: E0308 00:34:25.554064 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cb1cb3e-e36d-4101-ad14-2f03a84bfe95" containerName="util" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554070 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb1cb3e-e36d-4101-ad14-2f03a84bfe95" containerName="util" Mar 08 00:34:25 crc kubenswrapper[4762]: E0308 00:34:25.554078 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="kubecfg-setup" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554083 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="kubecfg-setup" Mar 08 00:34:25 crc kubenswrapper[4762]: E0308 00:34:25.554091 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="sbdb" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554097 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="sbdb" Mar 08 00:34:25 crc kubenswrapper[4762]: E0308 00:34:25.554103 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="nbdb" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554108 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="nbdb" Mar 08 00:34:25 crc kubenswrapper[4762]: E0308 00:34:25.554117 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="ovnkube-controller" Mar 
08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554122 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="ovnkube-controller" Mar 08 00:34:25 crc kubenswrapper[4762]: E0308 00:34:25.554131 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="northd" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554137 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="northd" Mar 08 00:34:25 crc kubenswrapper[4762]: E0308 00:34:25.554145 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cb1cb3e-e36d-4101-ad14-2f03a84bfe95" containerName="extract" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554151 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cb1cb3e-e36d-4101-ad14-2f03a84bfe95" containerName="extract" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554232 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="ovn-controller" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554242 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="northd" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554253 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cb1cb3e-e36d-4101-ad14-2f03a84bfe95" containerName="extract" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554262 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="kube-rbac-proxy-ovn-metrics" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554269 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="nbdb" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 
00:34:25.554276 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="ovnkube-controller" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554283 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="ovn-acl-logging" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554291 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="kube-rbac-proxy-node" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.554297 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerName="sbdb" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.556084 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.648511 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-openvswitch\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.648572 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-node-log\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.648616 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxppj\" (UniqueName: \"kubernetes.io/projected/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-kube-api-access-qxppj\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: 
\"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.648701 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovn-node-metrics-cert\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.649549 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.649667 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-node-log" (OuterVolumeSpecName: "node-log") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.649915 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovnkube-config\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650168 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-kubelet\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650233 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-log-socket\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650256 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-etc-openvswitch\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650280 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-systemd\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650300 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650322 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-cni-bin\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650346 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650402 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650371 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-log-socket" (OuterVolumeSpecName: "log-socket") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650380 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovnkube-script-lib\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650448 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650499 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-run-ovn-kubernetes\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650507 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650523 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-ovn\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650554 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-env-overrides\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650562 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650572 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-var-lib-openvswitch\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650588 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650611 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650692 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-run-netns\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650730 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650796 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650802 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-cni-netd\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650852 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-systemd-units\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650854 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650882 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-slash\") pod \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\" (UID: \"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf\") " Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650894 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650915 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650956 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650981 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-run-ovn-kubernetes\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.650998 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-slash" (OuterVolumeSpecName: "host-slash") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651010 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-run-openvswitch\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651074 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-run-ovn\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651124 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-slash\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651147 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-run-systemd\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651203 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-etc-openvswitch\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651268 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e6f987e6-c9d7-410e-9401-492e35771592-ovnkube-script-lib\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651362 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651432 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-kubelet\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651497 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-cni-netd\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651533 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e6f987e6-c9d7-410e-9401-492e35771592-env-overrides\") pod 
\"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651615 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-systemd-units\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651666 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-log-socket\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651730 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzc6m\" (UniqueName: \"kubernetes.io/projected/e6f987e6-c9d7-410e-9401-492e35771592-kube-api-access-hzc6m\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651827 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e6f987e6-c9d7-410e-9401-492e35771592-ovnkube-config\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651850 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-node-log\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651891 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-cni-bin\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651969 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-var-lib-openvswitch\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.651990 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e6f987e6-c9d7-410e-9401-492e35771592-ovn-node-metrics-cert\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652013 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-run-netns\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652096 4762 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652108 4762 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652118 4762 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652126 4762 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652136 4762 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652144 4762 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652152 4762 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652160 4762 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-systemd-units\") on node \"crc\" DevicePath 
\"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652169 4762 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-slash\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652176 4762 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652184 4762 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-node-log\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652192 4762 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652200 4762 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652208 4762 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-log-socket\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652216 4762 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652224 4762 reconciler_common.go:293] "Volume detached for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.652232 4762 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.654831 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-kube-api-access-qxppj" (OuterVolumeSpecName: "kube-api-access-qxppj") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "kube-api-access-qxppj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.655273 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.669258 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" (UID: "8c6764d8-a35c-4d3f-8b38-1cec1782d9bf"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.753946 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzc6m\" (UniqueName: \"kubernetes.io/projected/e6f987e6-c9d7-410e-9401-492e35771592-kube-api-access-hzc6m\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754014 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/e6f987e6-c9d7-410e-9401-492e35771592-ovnkube-config\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754053 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-node-log\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754085 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-cni-bin\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754122 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e6f987e6-c9d7-410e-9401-492e35771592-ovn-node-metrics-cert\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 
00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754152 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-run-netns\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754186 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-var-lib-openvswitch\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754237 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-run-ovn-kubernetes\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754271 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-run-openvswitch\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754304 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-run-ovn\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754345 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-slash\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754376 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-run-systemd\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754405 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-etc-openvswitch\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754436 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e6f987e6-c9d7-410e-9401-492e35771592-ovnkube-script-lib\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754474 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754517 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-kubelet\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754571 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-cni-netd\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754626 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e6f987e6-c9d7-410e-9401-492e35771592-env-overrides\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754669 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-run-openvswitch\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754712 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-systemd-units\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754826 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/e6f987e6-c9d7-410e-9401-492e35771592-ovnkube-config\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754836 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-run-ovn-kubernetes\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754860 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-run-netns\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754897 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754912 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-cni-netd\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754843 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-cni-bin\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754947 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-slash\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754971 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-etc-openvswitch\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754936 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-host-kubelet\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754991 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-run-ovn\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754959 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-run-systemd\") pod \"ovnkube-node-q6mt4\" (UID: 
\"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754953 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-systemd-units\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754820 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-node-log\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754845 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-log-socket\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.754905 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-log-socket\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.755013 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/e6f987e6-c9d7-410e-9401-492e35771592-var-lib-openvswitch\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc 
kubenswrapper[4762]: I0308 00:34:25.755204 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/e6f987e6-c9d7-410e-9401-492e35771592-env-overrides\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.755205 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxppj\" (UniqueName: \"kubernetes.io/projected/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-kube-api-access-qxppj\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.755250 4762 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.755261 4762 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.756051 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/e6f987e6-c9d7-410e-9401-492e35771592-ovnkube-script-lib\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.760217 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/e6f987e6-c9d7-410e-9401-492e35771592-ovn-node-metrics-cert\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 
00:34:25.782928 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzc6m\" (UniqueName: \"kubernetes.io/projected/e6f987e6-c9d7-410e-9401-492e35771592-kube-api-access-hzc6m\") pod \"ovnkube-node-q6mt4\" (UID: \"e6f987e6-c9d7-410e-9401-492e35771592\") " pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:25 crc kubenswrapper[4762]: I0308 00:34:25.872533 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.253933 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hfbrb_8c6764d8-a35c-4d3f-8b38-1cec1782d9bf/ovn-acl-logging/0.log" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.254820 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hfbrb_8c6764d8-a35c-4d3f-8b38-1cec1782d9bf/ovn-controller/0.log" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255115 4762 generic.go:334] "Generic (PLEG): container finished" podID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerID="776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2" exitCode=0 Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255136 4762 generic.go:334] "Generic (PLEG): container finished" podID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerID="50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d" exitCode=0 Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255145 4762 generic.go:334] "Generic (PLEG): container finished" podID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerID="5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a" exitCode=0 Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255152 4762 generic.go:334] "Generic (PLEG): container finished" podID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" 
containerID="8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c" exitCode=0 Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255159 4762 generic.go:334] "Generic (PLEG): container finished" podID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerID="897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6" exitCode=0 Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255166 4762 generic.go:334] "Generic (PLEG): container finished" podID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerID="49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84" exitCode=0 Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255174 4762 generic.go:334] "Generic (PLEG): container finished" podID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerID="7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2" exitCode=143 Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255180 4762 generic.go:334] "Generic (PLEG): container finished" podID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" containerID="c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d" exitCode=143 Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255223 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerDied","Data":"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255242 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255251 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerDied","Data":"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255262 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerDied","Data":"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255272 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerDied","Data":"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255281 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerDied","Data":"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255290 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerDied","Data":"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255301 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255311 4762 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255317 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255324 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerDied","Data":"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255331 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255336 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255342 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255347 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255352 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6"} Mar 08 
00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255358 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255364 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255369 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255375 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255382 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerDied","Data":"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255389 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255395 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255400 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255405 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255410 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255415 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255420 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255425 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255429 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255435 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hfbrb" event={"ID":"8c6764d8-a35c-4d3f-8b38-1cec1782d9bf","Type":"ContainerDied","Data":"89864bf0641f265ad45d3cfec592d05f16033d6dd4f549ff580761f55902ee4f"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255442 4762 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255448 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255452 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255457 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255462 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255466 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255471 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255475 4762 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255480 4762 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.255493 4762 scope.go:117] "RemoveContainer" containerID="776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.258152 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c4plq_c82b8767-5225-48de-aa6f-4668a0c01fcc/kube-multus/0.log" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.258179 4762 generic.go:334] "Generic (PLEG): container finished" podID="c82b8767-5225-48de-aa6f-4668a0c01fcc" containerID="91685841f1720dd6ca9ec9df2a692c30e4df50d24f251b43e6f4737e3a9d7e73" exitCode=2 Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.258210 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c4plq" event={"ID":"c82b8767-5225-48de-aa6f-4668a0c01fcc","Type":"ContainerDied","Data":"91685841f1720dd6ca9ec9df2a692c30e4df50d24f251b43e6f4737e3a9d7e73"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.258476 4762 scope.go:117] "RemoveContainer" containerID="91685841f1720dd6ca9ec9df2a692c30e4df50d24f251b43e6f4737e3a9d7e73" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.262046 4762 generic.go:334] "Generic (PLEG): container finished" podID="e6f987e6-c9d7-410e-9401-492e35771592" containerID="f8031b26ef8bd62d3f7b2119b257f86af6115811475a39b0e2c4b51c28f32024" exitCode=0 Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.262090 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" event={"ID":"e6f987e6-c9d7-410e-9401-492e35771592","Type":"ContainerDied","Data":"f8031b26ef8bd62d3f7b2119b257f86af6115811475a39b0e2c4b51c28f32024"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.262114 4762 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" event={"ID":"e6f987e6-c9d7-410e-9401-492e35771592","Type":"ContainerStarted","Data":"f9b7f79fd79a14e32eb6f622923a984ab406418b8fc3cd8838467d6f3ef04b75"} Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.296334 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hfbrb"] Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.304849 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hfbrb"] Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.319145 4762 scope.go:117] "RemoveContainer" containerID="50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.377295 4762 scope.go:117] "RemoveContainer" containerID="5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.399967 4762 scope.go:117] "RemoveContainer" containerID="8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.411752 4762 scope.go:117] "RemoveContainer" containerID="897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.434656 4762 scope.go:117] "RemoveContainer" containerID="49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.456262 4762 scope.go:117] "RemoveContainer" containerID="7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.475541 4762 scope.go:117] "RemoveContainer" containerID="c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.527015 4762 scope.go:117] "RemoveContainer" containerID="ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0" Mar 08 00:34:26 crc 
kubenswrapper[4762]: I0308 00:34:26.570626 4762 scope.go:117] "RemoveContainer" containerID="776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2" Mar 08 00:34:26 crc kubenswrapper[4762]: E0308 00:34:26.575170 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2\": container with ID starting with 776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2 not found: ID does not exist" containerID="776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.575215 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2"} err="failed to get container status \"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2\": rpc error: code = NotFound desc = could not find container \"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2\": container with ID starting with 776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.575242 4762 scope.go:117] "RemoveContainer" containerID="50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d" Mar 08 00:34:26 crc kubenswrapper[4762]: E0308 00:34:26.577759 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d\": container with ID starting with 50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d not found: ID does not exist" containerID="50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.577819 4762 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d"} err="failed to get container status \"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d\": rpc error: code = NotFound desc = could not find container \"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d\": container with ID starting with 50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.577856 4762 scope.go:117] "RemoveContainer" containerID="5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a" Mar 08 00:34:26 crc kubenswrapper[4762]: E0308 00:34:26.587475 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a\": container with ID starting with 5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a not found: ID does not exist" containerID="5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.587519 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a"} err="failed to get container status \"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a\": rpc error: code = NotFound desc = could not find container \"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a\": container with ID starting with 5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.587547 4762 scope.go:117] "RemoveContainer" containerID="8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c" Mar 08 00:34:26 crc kubenswrapper[4762]: E0308 00:34:26.592167 4762 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c\": container with ID starting with 8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c not found: ID does not exist" containerID="8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.592210 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c"} err="failed to get container status \"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c\": rpc error: code = NotFound desc = could not find container \"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c\": container with ID starting with 8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.592246 4762 scope.go:117] "RemoveContainer" containerID="897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6" Mar 08 00:34:26 crc kubenswrapper[4762]: E0308 00:34:26.596160 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6\": container with ID starting with 897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6 not found: ID does not exist" containerID="897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.596309 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6"} err="failed to get container status \"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6\": rpc error: code = NotFound desc = could not find container 
\"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6\": container with ID starting with 897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.596386 4762 scope.go:117] "RemoveContainer" containerID="49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84" Mar 08 00:34:26 crc kubenswrapper[4762]: E0308 00:34:26.599671 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84\": container with ID starting with 49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84 not found: ID does not exist" containerID="49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.599712 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84"} err="failed to get container status \"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84\": rpc error: code = NotFound desc = could not find container \"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84\": container with ID starting with 49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.599739 4762 scope.go:117] "RemoveContainer" containerID="7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2" Mar 08 00:34:26 crc kubenswrapper[4762]: E0308 00:34:26.603038 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2\": container with ID starting with 7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2 not found: ID does not exist" 
containerID="7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.603083 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2"} err="failed to get container status \"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2\": rpc error: code = NotFound desc = could not find container \"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2\": container with ID starting with 7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.603114 4762 scope.go:117] "RemoveContainer" containerID="c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d" Mar 08 00:34:26 crc kubenswrapper[4762]: E0308 00:34:26.607024 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d\": container with ID starting with c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d not found: ID does not exist" containerID="c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.607076 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d"} err="failed to get container status \"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d\": rpc error: code = NotFound desc = could not find container \"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d\": container with ID starting with c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.607100 4762 scope.go:117] 
"RemoveContainer" containerID="ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0" Mar 08 00:34:26 crc kubenswrapper[4762]: E0308 00:34:26.607382 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\": container with ID starting with ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0 not found: ID does not exist" containerID="ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.607469 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0"} err="failed to get container status \"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\": rpc error: code = NotFound desc = could not find container \"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\": container with ID starting with ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.607541 4762 scope.go:117] "RemoveContainer" containerID="776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.608156 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2"} err="failed to get container status \"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2\": rpc error: code = NotFound desc = could not find container \"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2\": container with ID starting with 776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.608188 4762 
scope.go:117] "RemoveContainer" containerID="50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.608434 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d"} err="failed to get container status \"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d\": rpc error: code = NotFound desc = could not find container \"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d\": container with ID starting with 50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.608455 4762 scope.go:117] "RemoveContainer" containerID="5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.608651 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a"} err="failed to get container status \"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a\": rpc error: code = NotFound desc = could not find container \"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a\": container with ID starting with 5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.608669 4762 scope.go:117] "RemoveContainer" containerID="8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.608883 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c"} err="failed to get container status \"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c\": rpc 
error: code = NotFound desc = could not find container \"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c\": container with ID starting with 8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.608903 4762 scope.go:117] "RemoveContainer" containerID="897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.609094 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6"} err="failed to get container status \"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6\": rpc error: code = NotFound desc = could not find container \"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6\": container with ID starting with 897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.609112 4762 scope.go:117] "RemoveContainer" containerID="49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.609312 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84"} err="failed to get container status \"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84\": rpc error: code = NotFound desc = could not find container \"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84\": container with ID starting with 49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.609331 4762 scope.go:117] "RemoveContainer" containerID="7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2" Mar 08 00:34:26 crc 
kubenswrapper[4762]: I0308 00:34:26.609622 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2"} err="failed to get container status \"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2\": rpc error: code = NotFound desc = could not find container \"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2\": container with ID starting with 7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.609640 4762 scope.go:117] "RemoveContainer" containerID="c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.610887 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d"} err="failed to get container status \"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d\": rpc error: code = NotFound desc = could not find container \"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d\": container with ID starting with c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.610907 4762 scope.go:117] "RemoveContainer" containerID="ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.614998 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0"} err="failed to get container status \"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\": rpc error: code = NotFound desc = could not find container \"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\": container 
with ID starting with ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.615037 4762 scope.go:117] "RemoveContainer" containerID="776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.615316 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2"} err="failed to get container status \"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2\": rpc error: code = NotFound desc = could not find container \"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2\": container with ID starting with 776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.615362 4762 scope.go:117] "RemoveContainer" containerID="50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.615644 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d"} err="failed to get container status \"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d\": rpc error: code = NotFound desc = could not find container \"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d\": container with ID starting with 50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.615723 4762 scope.go:117] "RemoveContainer" containerID="5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.616018 4762 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a"} err="failed to get container status \"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a\": rpc error: code = NotFound desc = could not find container \"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a\": container with ID starting with 5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.616037 4762 scope.go:117] "RemoveContainer" containerID="8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.616766 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c"} err="failed to get container status \"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c\": rpc error: code = NotFound desc = could not find container \"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c\": container with ID starting with 8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.616816 4762 scope.go:117] "RemoveContainer" containerID="897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.617038 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6"} err="failed to get container status \"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6\": rpc error: code = NotFound desc = could not find container \"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6\": container with ID starting with 897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6 not found: ID does not 
exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.617064 4762 scope.go:117] "RemoveContainer" containerID="49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.617271 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84"} err="failed to get container status \"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84\": rpc error: code = NotFound desc = could not find container \"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84\": container with ID starting with 49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.617293 4762 scope.go:117] "RemoveContainer" containerID="7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.617509 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2"} err="failed to get container status \"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2\": rpc error: code = NotFound desc = could not find container \"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2\": container with ID starting with 7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.617525 4762 scope.go:117] "RemoveContainer" containerID="c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.617690 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d"} err="failed to get container status 
\"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d\": rpc error: code = NotFound desc = could not find container \"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d\": container with ID starting with c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.617708 4762 scope.go:117] "RemoveContainer" containerID="ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.617873 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0"} err="failed to get container status \"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\": rpc error: code = NotFound desc = could not find container \"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\": container with ID starting with ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.617890 4762 scope.go:117] "RemoveContainer" containerID="776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.618691 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2"} err="failed to get container status \"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2\": rpc error: code = NotFound desc = could not find container \"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2\": container with ID starting with 776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.618709 4762 scope.go:117] "RemoveContainer" 
containerID="50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.619443 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d"} err="failed to get container status \"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d\": rpc error: code = NotFound desc = could not find container \"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d\": container with ID starting with 50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.619461 4762 scope.go:117] "RemoveContainer" containerID="5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.619705 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a"} err="failed to get container status \"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a\": rpc error: code = NotFound desc = could not find container \"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a\": container with ID starting with 5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.619729 4762 scope.go:117] "RemoveContainer" containerID="8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.619957 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c"} err="failed to get container status \"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c\": rpc error: code = NotFound desc = could 
not find container \"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c\": container with ID starting with 8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.619984 4762 scope.go:117] "RemoveContainer" containerID="897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.620192 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6"} err="failed to get container status \"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6\": rpc error: code = NotFound desc = could not find container \"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6\": container with ID starting with 897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.620219 4762 scope.go:117] "RemoveContainer" containerID="49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.620408 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84"} err="failed to get container status \"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84\": rpc error: code = NotFound desc = could not find container \"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84\": container with ID starting with 49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.620429 4762 scope.go:117] "RemoveContainer" containerID="7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 
00:34:26.620699 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2"} err="failed to get container status \"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2\": rpc error: code = NotFound desc = could not find container \"7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2\": container with ID starting with 7b0a883c914359d1730a98b42896f219d61e0ad4f8c10b9bd161d868921e76d2 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.620717 4762 scope.go:117] "RemoveContainer" containerID="c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.620932 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d"} err="failed to get container status \"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d\": rpc error: code = NotFound desc = could not find container \"c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d\": container with ID starting with c80444610a8bae0e02a4666155ea34c4611d669ae5ee5188ff5e3bc790462a8d not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.620953 4762 scope.go:117] "RemoveContainer" containerID="ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.621125 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0"} err="failed to get container status \"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\": rpc error: code = NotFound desc = could not find container \"ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0\": container with ID starting with 
ceab4dbb613169ffbb8062cf1f4b13c86e8ad9f363a8f429907ab0daa60720e0 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.621143 4762 scope.go:117] "RemoveContainer" containerID="776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.621330 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2"} err="failed to get container status \"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2\": rpc error: code = NotFound desc = could not find container \"776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2\": container with ID starting with 776403b3caca38be9f32a3a0ff235d4f6c0c3ee0c902858b1fa37f64625389d2 not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.621351 4762 scope.go:117] "RemoveContainer" containerID="50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.621529 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d"} err="failed to get container status \"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d\": rpc error: code = NotFound desc = could not find container \"50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d\": container with ID starting with 50b920d3664b500d08752e2dcff18da1095946a06437751cc715c4aac635498d not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.621548 4762 scope.go:117] "RemoveContainer" containerID="5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.621735 4762 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a"} err="failed to get container status \"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a\": rpc error: code = NotFound desc = could not find container \"5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a\": container with ID starting with 5b6f69bd42be12f1aea668bc1e11a72ddb209de7b4ab27246e8baafc329c9f4a not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.621757 4762 scope.go:117] "RemoveContainer" containerID="8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.621957 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c"} err="failed to get container status \"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c\": rpc error: code = NotFound desc = could not find container \"8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c\": container with ID starting with 8137ee96ec5b18e3e0a515412a21a7bd2c2e11292ba9e6ab61d03ca0c5c7711c not found: ID does not exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.621977 4762 scope.go:117] "RemoveContainer" containerID="897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.622191 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6"} err="failed to get container status \"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6\": rpc error: code = NotFound desc = could not find container \"897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6\": container with ID starting with 897e1c6d4aeefb9e401a7e9267ef2741a68c3a187a3f0f136ee9bb19fd4418f6 not found: ID does not 
exist" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.622265 4762 scope.go:117] "RemoveContainer" containerID="49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84" Mar 08 00:34:26 crc kubenswrapper[4762]: I0308 00:34:26.622558 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84"} err="failed to get container status \"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84\": rpc error: code = NotFound desc = could not find container \"49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84\": container with ID starting with 49c6c055266afb46c3d859c25331597ccbd31b63909e2640cb50c6513e551e84 not found: ID does not exist" Mar 08 00:34:27 crc kubenswrapper[4762]: I0308 00:34:27.276907 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c6764d8-a35c-4d3f-8b38-1cec1782d9bf" path="/var/lib/kubelet/pods/8c6764d8-a35c-4d3f-8b38-1cec1782d9bf/volumes" Mar 08 00:34:27 crc kubenswrapper[4762]: I0308 00:34:27.363636 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-c4plq_c82b8767-5225-48de-aa6f-4668a0c01fcc/kube-multus/0.log" Mar 08 00:34:27 crc kubenswrapper[4762]: I0308 00:34:27.363743 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-c4plq" event={"ID":"c82b8767-5225-48de-aa6f-4668a0c01fcc","Type":"ContainerStarted","Data":"b590183310a2cbb1536138fc4d5f0332f5c2a44b2c1ca20e74da7ffc0278e9a6"} Mar 08 00:34:27 crc kubenswrapper[4762]: I0308 00:34:27.367902 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" event={"ID":"e6f987e6-c9d7-410e-9401-492e35771592","Type":"ContainerStarted","Data":"74d95e71335b4122b82571d7101aaa437f8a204be404f83dc5569ad8c9556984"} Mar 08 00:34:27 crc kubenswrapper[4762]: I0308 00:34:27.367929 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" event={"ID":"e6f987e6-c9d7-410e-9401-492e35771592","Type":"ContainerStarted","Data":"cc510b8a22e7e5cf1016f8f62159f06b9e3631f3d3cd9e4623d79245bf18072f"} Mar 08 00:34:27 crc kubenswrapper[4762]: I0308 00:34:27.367940 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" event={"ID":"e6f987e6-c9d7-410e-9401-492e35771592","Type":"ContainerStarted","Data":"e46596badf88eeb97b83638b76dcd618fa6b807a0af45759b9633857817aedef"} Mar 08 00:34:27 crc kubenswrapper[4762]: I0308 00:34:27.367952 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" event={"ID":"e6f987e6-c9d7-410e-9401-492e35771592","Type":"ContainerStarted","Data":"c229dc5e0f9c080a893bb943fef72d135279fd6e5a13d661f6299c3629648c8f"} Mar 08 00:34:28 crc kubenswrapper[4762]: I0308 00:34:28.380303 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" event={"ID":"e6f987e6-c9d7-410e-9401-492e35771592","Type":"ContainerStarted","Data":"436710d12204204629fbc80f85ec2c7eeabffeeef5f9afd79080efededca3dd4"} Mar 08 00:34:28 crc kubenswrapper[4762]: I0308 00:34:28.380817 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" event={"ID":"e6f987e6-c9d7-410e-9401-492e35771592","Type":"ContainerStarted","Data":"8f5b882bcc86681c03c5058d140bed7990c4b78ca4500af797c2a9c980df0d27"} Mar 08 00:34:30 crc kubenswrapper[4762]: I0308 00:34:30.395371 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" event={"ID":"e6f987e6-c9d7-410e-9401-492e35771592","Type":"ContainerStarted","Data":"942b02ff0c878eaeff9a4fca989af0b2d36a9437ca5ff89f79d280e431c53537"} Mar 08 00:34:31 crc kubenswrapper[4762]: I0308 00:34:31.930604 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc"] Mar 08 
00:34:31 crc kubenswrapper[4762]: I0308 00:34:31.931908 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" Mar 08 00:34:31 crc kubenswrapper[4762]: I0308 00:34:31.933747 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 08 00:34:31 crc kubenswrapper[4762]: I0308 00:34:31.934419 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-2fmq2" Mar 08 00:34:31 crc kubenswrapper[4762]: I0308 00:34:31.936134 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 08 00:34:31 crc kubenswrapper[4762]: I0308 00:34:31.977098 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg"] Mar 08 00:34:31 crc kubenswrapper[4762]: I0308 00:34:31.977848 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" Mar 08 00:34:31 crc kubenswrapper[4762]: I0308 00:34:31.979469 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-72v5v" Mar 08 00:34:31 crc kubenswrapper[4762]: I0308 00:34:31.979674 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 08 00:34:31 crc kubenswrapper[4762]: I0308 00:34:31.998104 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n"] Mar 08 00:34:31 crc kubenswrapper[4762]: I0308 00:34:31.998946 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.040175 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0e56a85-8dc3-4b03-9dc5-c9cce7682162-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75b9887994-8vm8n\" (UID: \"f0e56a85-8dc3-4b03-9dc5-c9cce7682162\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.040508 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0e56a85-8dc3-4b03-9dc5-c9cce7682162-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75b9887994-8vm8n\" (UID: \"f0e56a85-8dc3-4b03-9dc5-c9cce7682162\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.040549 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d6adc3f-581b-489b-9bbd-dbc4e93c54f1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75b9887994-rp4fg\" (UID: \"3d6adc3f-581b-489b-9bbd-dbc4e93c54f1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.040664 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d6adc3f-581b-489b-9bbd-dbc4e93c54f1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75b9887994-rp4fg\" (UID: \"3d6adc3f-581b-489b-9bbd-dbc4e93c54f1\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.040834 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5fmw\" (UniqueName: \"kubernetes.io/projected/9f4ae992-28ff-440b-885f-2b01a62887d1-kube-api-access-c5fmw\") pod \"obo-prometheus-operator-68bc856cb9-5cnvc\" (UID: \"9f4ae992-28ff-440b-885f-2b01a62887d1\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.142382 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5fmw\" (UniqueName: \"kubernetes.io/projected/9f4ae992-28ff-440b-885f-2b01a62887d1-kube-api-access-c5fmw\") pod \"obo-prometheus-operator-68bc856cb9-5cnvc\" (UID: \"9f4ae992-28ff-440b-885f-2b01a62887d1\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.142445 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0e56a85-8dc3-4b03-9dc5-c9cce7682162-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75b9887994-8vm8n\" (UID: \"f0e56a85-8dc3-4b03-9dc5-c9cce7682162\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.142470 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0e56a85-8dc3-4b03-9dc5-c9cce7682162-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75b9887994-8vm8n\" (UID: \"f0e56a85-8dc3-4b03-9dc5-c9cce7682162\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.142504 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d6adc3f-581b-489b-9bbd-dbc4e93c54f1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75b9887994-rp4fg\" (UID: \"3d6adc3f-581b-489b-9bbd-dbc4e93c54f1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.142524 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d6adc3f-581b-489b-9bbd-dbc4e93c54f1-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75b9887994-rp4fg\" (UID: \"3d6adc3f-581b-489b-9bbd-dbc4e93c54f1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.146513 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f0e56a85-8dc3-4b03-9dc5-c9cce7682162-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75b9887994-8vm8n\" (UID: \"f0e56a85-8dc3-4b03-9dc5-c9cce7682162\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.147028 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d6adc3f-581b-489b-9bbd-dbc4e93c54f1-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-75b9887994-rp4fg\" (UID: \"3d6adc3f-581b-489b-9bbd-dbc4e93c54f1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.150177 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d6adc3f-581b-489b-9bbd-dbc4e93c54f1-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-75b9887994-rp4fg\" (UID: \"3d6adc3f-581b-489b-9bbd-dbc4e93c54f1\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.152305 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f0e56a85-8dc3-4b03-9dc5-c9cce7682162-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-75b9887994-8vm8n\" (UID: \"f0e56a85-8dc3-4b03-9dc5-c9cce7682162\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.170043 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-jr6wh"] Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.170683 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.173274 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.173471 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-674tc" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.181394 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5fmw\" (UniqueName: \"kubernetes.io/projected/9f4ae992-28ff-440b-885f-2b01a62887d1-kube-api-access-c5fmw\") pod \"obo-prometheus-operator-68bc856cb9-5cnvc\" (UID: \"9f4ae992-28ff-440b-885f-2b01a62887d1\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.244352 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/977085a1-8184-4c52-8e8d-6cb64635e335-observability-operator-tls\") pod \"observability-operator-59bdc8b94-jr6wh\" (UID: \"977085a1-8184-4c52-8e8d-6cb64635e335\") " pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.244597 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g78bw\" (UniqueName: \"kubernetes.io/projected/977085a1-8184-4c52-8e8d-6cb64635e335-kube-api-access-g78bw\") pod \"observability-operator-59bdc8b94-jr6wh\" (UID: \"977085a1-8184-4c52-8e8d-6cb64635e335\") " pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.247933 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.271013 4762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-5cnvc_openshift-operators_9f4ae992-28ff-440b-885f-2b01a62887d1_0(7447fbc8d9653559c586d328a0b3b7332c1c5580e70ea0f4c71e0c5e7a68daab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.271115 4762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-5cnvc_openshift-operators_9f4ae992-28ff-440b-885f-2b01a62887d1_0(7447fbc8d9653559c586d328a0b3b7332c1c5580e70ea0f4c71e0c5e7a68daab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.271143 4762 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-5cnvc_openshift-operators_9f4ae992-28ff-440b-885f-2b01a62887d1_0(7447fbc8d9653559c586d328a0b3b7332c1c5580e70ea0f4c71e0c5e7a68daab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.271208 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-5cnvc_openshift-operators(9f4ae992-28ff-440b-885f-2b01a62887d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-5cnvc_openshift-operators(9f4ae992-28ff-440b-885f-2b01a62887d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-5cnvc_openshift-operators_9f4ae992-28ff-440b-885f-2b01a62887d1_0(7447fbc8d9653559c586d328a0b3b7332c1c5580e70ea0f4c71e0c5e7a68daab): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" podUID="9f4ae992-28ff-440b-885f-2b01a62887d1" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.290583 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.310737 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.315156 4762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75b9887994-rp4fg_openshift-operators_3d6adc3f-581b-489b-9bbd-dbc4e93c54f1_0(5750d6e43b3a71d217d99395b06f0dca9bc3efb539565eb8a991898e747b34c4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.315226 4762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75b9887994-rp4fg_openshift-operators_3d6adc3f-581b-489b-9bbd-dbc4e93c54f1_0(5750d6e43b3a71d217d99395b06f0dca9bc3efb539565eb8a991898e747b34c4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.315253 4762 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75b9887994-rp4fg_openshift-operators_3d6adc3f-581b-489b-9bbd-dbc4e93c54f1_0(5750d6e43b3a71d217d99395b06f0dca9bc3efb539565eb8a991898e747b34c4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.315308 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75b9887994-rp4fg_openshift-operators(3d6adc3f-581b-489b-9bbd-dbc4e93c54f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75b9887994-rp4fg_openshift-operators(3d6adc3f-581b-489b-9bbd-dbc4e93c54f1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75b9887994-rp4fg_openshift-operators_3d6adc3f-581b-489b-9bbd-dbc4e93c54f1_0(5750d6e43b3a71d217d99395b06f0dca9bc3efb539565eb8a991898e747b34c4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" podUID="3d6adc3f-581b-489b-9bbd-dbc4e93c54f1" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.343511 4762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75b9887994-8vm8n_openshift-operators_f0e56a85-8dc3-4b03-9dc5-c9cce7682162_0(a4f15de7a649796896a37bbd2a4594e8c28ecc703f94408d346d40b2ea1e1cc6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.343615 4762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75b9887994-8vm8n_openshift-operators_f0e56a85-8dc3-4b03-9dc5-c9cce7682162_0(a4f15de7a649796896a37bbd2a4594e8c28ecc703f94408d346d40b2ea1e1cc6): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.343644 4762 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75b9887994-8vm8n_openshift-operators_f0e56a85-8dc3-4b03-9dc5-c9cce7682162_0(a4f15de7a649796896a37bbd2a4594e8c28ecc703f94408d346d40b2ea1e1cc6): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.343716 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75b9887994-8vm8n_openshift-operators(f0e56a85-8dc3-4b03-9dc5-c9cce7682162)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75b9887994-8vm8n_openshift-operators(f0e56a85-8dc3-4b03-9dc5-c9cce7682162)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75b9887994-8vm8n_openshift-operators_f0e56a85-8dc3-4b03-9dc5-c9cce7682162_0(a4f15de7a649796896a37bbd2a4594e8c28ecc703f94408d346d40b2ea1e1cc6): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" podUID="f0e56a85-8dc3-4b03-9dc5-c9cce7682162" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.346706 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g78bw\" (UniqueName: \"kubernetes.io/projected/977085a1-8184-4c52-8e8d-6cb64635e335-kube-api-access-g78bw\") pod \"observability-operator-59bdc8b94-jr6wh\" (UID: \"977085a1-8184-4c52-8e8d-6cb64635e335\") " pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.346854 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/977085a1-8184-4c52-8e8d-6cb64635e335-observability-operator-tls\") pod \"observability-operator-59bdc8b94-jr6wh\" (UID: \"977085a1-8184-4c52-8e8d-6cb64635e335\") " pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.356370 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/977085a1-8184-4c52-8e8d-6cb64635e335-observability-operator-tls\") pod \"observability-operator-59bdc8b94-jr6wh\" (UID: \"977085a1-8184-4c52-8e8d-6cb64635e335\") " pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.356444 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9ntmw"] Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.357139 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.360342 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-xvvtn" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.401627 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g78bw\" (UniqueName: \"kubernetes.io/projected/977085a1-8184-4c52-8e8d-6cb64635e335-kube-api-access-g78bw\") pod \"observability-operator-59bdc8b94-jr6wh\" (UID: \"977085a1-8184-4c52-8e8d-6cb64635e335\") " pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.409740 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" event={"ID":"e6f987e6-c9d7-410e-9401-492e35771592","Type":"ContainerStarted","Data":"4c928e8de83df23fe8744e5a15573935115a5a172d3a277f15ae86774b466fc1"} Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.410754 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.410925 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.411126 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.445269 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.445334 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.447981 
4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnvts\" (UniqueName: \"kubernetes.io/projected/3082ab77-d932-4350-915b-43172392ba8e-kube-api-access-bnvts\") pod \"perses-operator-5bf474d74f-9ntmw\" (UID: \"3082ab77-d932-4350-915b-43172392ba8e\") " pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.448068 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3082ab77-d932-4350-915b-43172392ba8e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9ntmw\" (UID: \"3082ab77-d932-4350-915b-43172392ba8e\") " pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.466348 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" podStartSLOduration=7.466331934 podStartE2EDuration="7.466331934s" podCreationTimestamp="2026-03-08 00:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:34:32.464444275 +0000 UTC m=+693.938588619" watchObservedRunningTime="2026-03-08 00:34:32.466331934 +0000 UTC m=+693.940476278" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.496251 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.520310 4762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jr6wh_openshift-operators_977085a1-8184-4c52-8e8d-6cb64635e335_0(268d38353ababdab750f8fccdcaef98623abbd74c9750680f4f19a14788f2155): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.520366 4762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jr6wh_openshift-operators_977085a1-8184-4c52-8e8d-6cb64635e335_0(268d38353ababdab750f8fccdcaef98623abbd74c9750680f4f19a14788f2155): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.520387 4762 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jr6wh_openshift-operators_977085a1-8184-4c52-8e8d-6cb64635e335_0(268d38353ababdab750f8fccdcaef98623abbd74c9750680f4f19a14788f2155): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.520435 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-jr6wh_openshift-operators(977085a1-8184-4c52-8e8d-6cb64635e335)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-jr6wh_openshift-operators(977085a1-8184-4c52-8e8d-6cb64635e335)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jr6wh_openshift-operators_977085a1-8184-4c52-8e8d-6cb64635e335_0(268d38353ababdab750f8fccdcaef98623abbd74c9750680f4f19a14788f2155): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" podUID="977085a1-8184-4c52-8e8d-6cb64635e335" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.549008 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/3082ab77-d932-4350-915b-43172392ba8e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9ntmw\" (UID: \"3082ab77-d932-4350-915b-43172392ba8e\") " pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.549123 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnvts\" (UniqueName: \"kubernetes.io/projected/3082ab77-d932-4350-915b-43172392ba8e-kube-api-access-bnvts\") pod \"perses-operator-5bf474d74f-9ntmw\" (UID: \"3082ab77-d932-4350-915b-43172392ba8e\") " pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.550012 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/3082ab77-d932-4350-915b-43172392ba8e-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9ntmw\" (UID: \"3082ab77-d932-4350-915b-43172392ba8e\") " pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.566497 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnvts\" (UniqueName: \"kubernetes.io/projected/3082ab77-d932-4350-915b-43172392ba8e-kube-api-access-bnvts\") pod \"perses-operator-5bf474d74f-9ntmw\" (UID: \"3082ab77-d932-4350-915b-43172392ba8e\") " pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:32 crc kubenswrapper[4762]: I0308 00:34:32.711491 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.732122 4762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9ntmw_openshift-operators_3082ab77-d932-4350-915b-43172392ba8e_0(d4f92e193e7766ed05d6c5e8cdcfdc0f1e74c5e5b03b05eb918d0b2e3b76d28c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.732260 4762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9ntmw_openshift-operators_3082ab77-d932-4350-915b-43172392ba8e_0(d4f92e193e7766ed05d6c5e8cdcfdc0f1e74c5e5b03b05eb918d0b2e3b76d28c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.732358 4762 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9ntmw_openshift-operators_3082ab77-d932-4350-915b-43172392ba8e_0(d4f92e193e7766ed05d6c5e8cdcfdc0f1e74c5e5b03b05eb918d0b2e3b76d28c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:32 crc kubenswrapper[4762]: E0308 00:34:32.732470 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-9ntmw_openshift-operators(3082ab77-d932-4350-915b-43172392ba8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-9ntmw_openshift-operators(3082ab77-d932-4350-915b-43172392ba8e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9ntmw_openshift-operators_3082ab77-d932-4350-915b-43172392ba8e_0(d4f92e193e7766ed05d6c5e8cdcfdc0f1e74c5e5b03b05eb918d0b2e3b76d28c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" podUID="3082ab77-d932-4350-915b-43172392ba8e" Mar 08 00:34:33 crc kubenswrapper[4762]: I0308 00:34:33.924642 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc"] Mar 08 00:34:33 crc kubenswrapper[4762]: I0308 00:34:33.925131 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" Mar 08 00:34:33 crc kubenswrapper[4762]: I0308 00:34:33.925548 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" Mar 08 00:34:33 crc kubenswrapper[4762]: I0308 00:34:33.929219 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n"] Mar 08 00:34:33 crc kubenswrapper[4762]: I0308 00:34:33.929316 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" Mar 08 00:34:33 crc kubenswrapper[4762]: I0308 00:34:33.929710 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" Mar 08 00:34:33 crc kubenswrapper[4762]: I0308 00:34:33.959892 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-jr6wh"] Mar 08 00:34:33 crc kubenswrapper[4762]: I0308 00:34:33.960018 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:33 crc kubenswrapper[4762]: I0308 00:34:33.960697 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:33 crc kubenswrapper[4762]: E0308 00:34:33.983004 4762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-5cnvc_openshift-operators_9f4ae992-28ff-440b-885f-2b01a62887d1_0(354c41b3e8b22c2755fb23fa910746cd68899828b2e367faf81e677f9e39fc6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 08 00:34:33 crc kubenswrapper[4762]: E0308 00:34:33.983224 4762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-5cnvc_openshift-operators_9f4ae992-28ff-440b-885f-2b01a62887d1_0(354c41b3e8b22c2755fb23fa910746cd68899828b2e367faf81e677f9e39fc6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" Mar 08 00:34:33 crc kubenswrapper[4762]: E0308 00:34:33.983331 4762 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-5cnvc_openshift-operators_9f4ae992-28ff-440b-885f-2b01a62887d1_0(354c41b3e8b22c2755fb23fa910746cd68899828b2e367faf81e677f9e39fc6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" Mar 08 00:34:33 crc kubenswrapper[4762]: E0308 00:34:33.983461 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-5cnvc_openshift-operators(9f4ae992-28ff-440b-885f-2b01a62887d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-5cnvc_openshift-operators(9f4ae992-28ff-440b-885f-2b01a62887d1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-5cnvc_openshift-operators_9f4ae992-28ff-440b-885f-2b01a62887d1_0(354c41b3e8b22c2755fb23fa910746cd68899828b2e367faf81e677f9e39fc6d): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" podUID="9f4ae992-28ff-440b-885f-2b01a62887d1" Mar 08 00:34:33 crc kubenswrapper[4762]: I0308 00:34:33.991945 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg"] Mar 08 00:34:33 crc kubenswrapper[4762]: I0308 00:34:33.992078 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" Mar 08 00:34:33 crc kubenswrapper[4762]: I0308 00:34:33.993236 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" Mar 08 00:34:34 crc kubenswrapper[4762]: I0308 00:34:34.029172 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9ntmw"] Mar 08 00:34:34 crc kubenswrapper[4762]: I0308 00:34:34.029312 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:34 crc kubenswrapper[4762]: I0308 00:34:34.029837 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:34 crc kubenswrapper[4762]: E0308 00:34:34.041749 4762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75b9887994-8vm8n_openshift-operators_f0e56a85-8dc3-4b03-9dc5-c9cce7682162_0(f63b1755a46eac54a442bdcf0bd9df5637cc6d29ab158fe6638a060a760bf154): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 08 00:34:34 crc kubenswrapper[4762]: E0308 00:34:34.041887 4762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75b9887994-8vm8n_openshift-operators_f0e56a85-8dc3-4b03-9dc5-c9cce7682162_0(f63b1755a46eac54a442bdcf0bd9df5637cc6d29ab158fe6638a060a760bf154): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" Mar 08 00:34:34 crc kubenswrapper[4762]: E0308 00:34:34.041908 4762 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75b9887994-8vm8n_openshift-operators_f0e56a85-8dc3-4b03-9dc5-c9cce7682162_0(f63b1755a46eac54a442bdcf0bd9df5637cc6d29ab158fe6638a060a760bf154): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" Mar 08 00:34:34 crc kubenswrapper[4762]: E0308 00:34:34.041954 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75b9887994-8vm8n_openshift-operators(f0e56a85-8dc3-4b03-9dc5-c9cce7682162)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75b9887994-8vm8n_openshift-operators(f0e56a85-8dc3-4b03-9dc5-c9cce7682162)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75b9887994-8vm8n_openshift-operators_f0e56a85-8dc3-4b03-9dc5-c9cce7682162_0(f63b1755a46eac54a442bdcf0bd9df5637cc6d29ab158fe6638a060a760bf154): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" podUID="f0e56a85-8dc3-4b03-9dc5-c9cce7682162" Mar 08 00:34:34 crc kubenswrapper[4762]: E0308 00:34:34.047941 4762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75b9887994-rp4fg_openshift-operators_3d6adc3f-581b-489b-9bbd-dbc4e93c54f1_0(d79fe69563804965ddd3623db0cebcafb947b514b90bc742efe186b75daa27e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 08 00:34:34 crc kubenswrapper[4762]: E0308 00:34:34.048003 4762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75b9887994-rp4fg_openshift-operators_3d6adc3f-581b-489b-9bbd-dbc4e93c54f1_0(d79fe69563804965ddd3623db0cebcafb947b514b90bc742efe186b75daa27e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" Mar 08 00:34:34 crc kubenswrapper[4762]: E0308 00:34:34.048024 4762 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75b9887994-rp4fg_openshift-operators_3d6adc3f-581b-489b-9bbd-dbc4e93c54f1_0(d79fe69563804965ddd3623db0cebcafb947b514b90bc742efe186b75daa27e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" Mar 08 00:34:34 crc kubenswrapper[4762]: E0308 00:34:34.048073 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-75b9887994-rp4fg_openshift-operators(3d6adc3f-581b-489b-9bbd-dbc4e93c54f1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-75b9887994-rp4fg_openshift-operators(3d6adc3f-581b-489b-9bbd-dbc4e93c54f1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-75b9887994-rp4fg_openshift-operators_3d6adc3f-581b-489b-9bbd-dbc4e93c54f1_0(d79fe69563804965ddd3623db0cebcafb947b514b90bc742efe186b75daa27e5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" podUID="3d6adc3f-581b-489b-9bbd-dbc4e93c54f1" Mar 08 00:34:34 crc kubenswrapper[4762]: E0308 00:34:34.052083 4762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jr6wh_openshift-operators_977085a1-8184-4c52-8e8d-6cb64635e335_0(f3876f3535fd7bb5e9643536a6408a9c6e40f618905cf880128a1b34d0344c5a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 08 00:34:34 crc kubenswrapper[4762]: E0308 00:34:34.052175 4762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jr6wh_openshift-operators_977085a1-8184-4c52-8e8d-6cb64635e335_0(f3876f3535fd7bb5e9643536a6408a9c6e40f618905cf880128a1b34d0344c5a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:34 crc kubenswrapper[4762]: E0308 00:34:34.052191 4762 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jr6wh_openshift-operators_977085a1-8184-4c52-8e8d-6cb64635e335_0(f3876f3535fd7bb5e9643536a6408a9c6e40f618905cf880128a1b34d0344c5a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:34 crc kubenswrapper[4762]: E0308 00:34:34.052220 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-jr6wh_openshift-operators(977085a1-8184-4c52-8e8d-6cb64635e335)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-jr6wh_openshift-operators(977085a1-8184-4c52-8e8d-6cb64635e335)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-jr6wh_openshift-operators_977085a1-8184-4c52-8e8d-6cb64635e335_0(f3876f3535fd7bb5e9643536a6408a9c6e40f618905cf880128a1b34d0344c5a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" podUID="977085a1-8184-4c52-8e8d-6cb64635e335" Mar 08 00:34:34 crc kubenswrapper[4762]: E0308 00:34:34.078486 4762 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9ntmw_openshift-operators_3082ab77-d932-4350-915b-43172392ba8e_0(881a5d773dd0ca12453a4913c3aedb455725a38e686e8a6681102020c87ef5fe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 08 00:34:34 crc kubenswrapper[4762]: E0308 00:34:34.078591 4762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9ntmw_openshift-operators_3082ab77-d932-4350-915b-43172392ba8e_0(881a5d773dd0ca12453a4913c3aedb455725a38e686e8a6681102020c87ef5fe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:34 crc kubenswrapper[4762]: E0308 00:34:34.078618 4762 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9ntmw_openshift-operators_3082ab77-d932-4350-915b-43172392ba8e_0(881a5d773dd0ca12453a4913c3aedb455725a38e686e8a6681102020c87ef5fe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:34 crc kubenswrapper[4762]: E0308 00:34:34.078681 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-9ntmw_openshift-operators(3082ab77-d932-4350-915b-43172392ba8e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-9ntmw_openshift-operators(3082ab77-d932-4350-915b-43172392ba8e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-9ntmw_openshift-operators_3082ab77-d932-4350-915b-43172392ba8e_0(881a5d773dd0ca12453a4913c3aedb455725a38e686e8a6681102020c87ef5fe): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" podUID="3082ab77-d932-4350-915b-43172392ba8e" Mar 08 00:34:42 crc kubenswrapper[4762]: I0308 00:34:42.851614 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 00:34:42 crc kubenswrapper[4762]: I0308 00:34:42.853900 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:34:44 crc kubenswrapper[4762]: I0308 00:34:44.262700 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:44 crc kubenswrapper[4762]: I0308 00:34:44.263391 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:44 crc kubenswrapper[4762]: I0308 00:34:44.568784 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-jr6wh"] Mar 08 00:34:45 crc kubenswrapper[4762]: I0308 00:34:45.484968 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" event={"ID":"977085a1-8184-4c52-8e8d-6cb64635e335","Type":"ContainerStarted","Data":"6b63bdf9eeb20a69e2cf5b148189746d2c9456be6b92e31f1c4b5eb1f8a6df44"} Mar 08 00:34:46 crc kubenswrapper[4762]: I0308 00:34:46.262952 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:46 crc kubenswrapper[4762]: I0308 00:34:46.263684 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:46 crc kubenswrapper[4762]: I0308 00:34:46.264193 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" Mar 08 00:34:46 crc kubenswrapper[4762]: I0308 00:34:46.264375 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" Mar 08 00:34:46 crc kubenswrapper[4762]: I0308 00:34:46.551251 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9ntmw"] Mar 08 00:34:46 crc kubenswrapper[4762]: W0308 00:34:46.569008 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3082ab77_d932_4350_915b_43172392ba8e.slice/crio-f04353ea27835411398ec897ca76743f93461337730bffa4f7bf4e07eeb8c796 WatchSource:0}: Error finding container f04353ea27835411398ec897ca76743f93461337730bffa4f7bf4e07eeb8c796: Status 404 returned error can't find the container with id f04353ea27835411398ec897ca76743f93461337730bffa4f7bf4e07eeb8c796 Mar 08 00:34:46 crc kubenswrapper[4762]: I0308 00:34:46.637148 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc"] Mar 08 00:34:46 crc kubenswrapper[4762]: W0308 00:34:46.668335 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f4ae992_28ff_440b_885f_2b01a62887d1.slice/crio-5ea26ac37140dd0737de1f1b7b0d183d9a40c3016178cb8c998804f080686d66 WatchSource:0}: Error finding container 
5ea26ac37140dd0737de1f1b7b0d183d9a40c3016178cb8c998804f080686d66: Status 404 returned error can't find the container with id 5ea26ac37140dd0737de1f1b7b0d183d9a40c3016178cb8c998804f080686d66 Mar 08 00:34:47 crc kubenswrapper[4762]: I0308 00:34:47.262276 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" Mar 08 00:34:47 crc kubenswrapper[4762]: I0308 00:34:47.262999 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" Mar 08 00:34:47 crc kubenswrapper[4762]: I0308 00:34:47.523290 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" event={"ID":"9f4ae992-28ff-440b-885f-2b01a62887d1","Type":"ContainerStarted","Data":"5ea26ac37140dd0737de1f1b7b0d183d9a40c3016178cb8c998804f080686d66"} Mar 08 00:34:47 crc kubenswrapper[4762]: I0308 00:34:47.524453 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" event={"ID":"3082ab77-d932-4350-915b-43172392ba8e","Type":"ContainerStarted","Data":"f04353ea27835411398ec897ca76743f93461337730bffa4f7bf4e07eeb8c796"} Mar 08 00:34:48 crc kubenswrapper[4762]: I0308 00:34:48.940425 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg"] Mar 08 00:34:49 crc kubenswrapper[4762]: I0308 00:34:49.262874 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" Mar 08 00:34:49 crc kubenswrapper[4762]: I0308 00:34:49.274533 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" Mar 08 00:34:52 crc kubenswrapper[4762]: I0308 00:34:52.571245 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" event={"ID":"3d6adc3f-581b-489b-9bbd-dbc4e93c54f1","Type":"ContainerStarted","Data":"f9fad0fe97db81f5beb63c2128e617bc184b59fd48b9c16759eb51a0519c2700"} Mar 08 00:34:53 crc kubenswrapper[4762]: I0308 00:34:53.884216 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n"] Mar 08 00:34:53 crc kubenswrapper[4762]: W0308 00:34:53.899670 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0e56a85_8dc3_4b03_9dc5_c9cce7682162.slice/crio-fe4c292e690d7d721d688f8a66d3aabfaa89bf6268557cf0c0b88b01f9ff48c4 WatchSource:0}: Error finding container fe4c292e690d7d721d688f8a66d3aabfaa89bf6268557cf0c0b88b01f9ff48c4: Status 404 returned error can't find the container with id fe4c292e690d7d721d688f8a66d3aabfaa89bf6268557cf0c0b88b01f9ff48c4 Mar 08 00:34:54 crc kubenswrapper[4762]: I0308 00:34:54.582552 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" event={"ID":"3082ab77-d932-4350-915b-43172392ba8e","Type":"ContainerStarted","Data":"c43644e14d156bebd2daca5feeb9e5cb03778024f5063c62e7cb750cbbf2e7cd"} Mar 08 00:34:54 crc kubenswrapper[4762]: I0308 00:34:54.582658 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:34:54 crc kubenswrapper[4762]: I0308 00:34:54.583411 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" 
event={"ID":"f0e56a85-8dc3-4b03-9dc5-c9cce7682162","Type":"ContainerStarted","Data":"fe4c292e690d7d721d688f8a66d3aabfaa89bf6268557cf0c0b88b01f9ff48c4"} Mar 08 00:34:54 crc kubenswrapper[4762]: I0308 00:34:54.585087 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" event={"ID":"977085a1-8184-4c52-8e8d-6cb64635e335","Type":"ContainerStarted","Data":"c80d66e8e7e5668f7c00d99192f18daca4c57debfeb8a489761d0dd3d149108a"} Mar 08 00:34:54 crc kubenswrapper[4762]: I0308 00:34:54.585297 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:54 crc kubenswrapper[4762]: I0308 00:34:54.588083 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 00:34:54 crc kubenswrapper[4762]: I0308 00:34:54.588361 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" event={"ID":"9f4ae992-28ff-440b-885f-2b01a62887d1","Type":"ContainerStarted","Data":"65f6b9f862a833e01bfe76f27bcd5c39140b4a072186e6ee007a581771433136"} Mar 08 00:34:54 crc kubenswrapper[4762]: I0308 00:34:54.607179 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" podStartSLOduration=15.581320434 podStartE2EDuration="22.607153706s" podCreationTimestamp="2026-03-08 00:34:32 +0000 UTC" firstStartedPulling="2026-03-08 00:34:46.591304406 +0000 UTC m=+708.065448750" lastFinishedPulling="2026-03-08 00:34:53.617137678 +0000 UTC m=+715.091282022" observedRunningTime="2026-03-08 00:34:54.602633547 +0000 UTC m=+716.076777891" watchObservedRunningTime="2026-03-08 00:34:54.607153706 +0000 UTC m=+716.081298080" Mar 08 00:34:54 crc kubenswrapper[4762]: I0308 00:34:54.627190 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" podStartSLOduration=13.533937316 podStartE2EDuration="22.627164607s" podCreationTimestamp="2026-03-08 00:34:32 +0000 UTC" firstStartedPulling="2026-03-08 00:34:44.584400758 +0000 UTC m=+706.058545102" lastFinishedPulling="2026-03-08 00:34:53.677628049 +0000 UTC m=+715.151772393" observedRunningTime="2026-03-08 00:34:54.620189308 +0000 UTC m=+716.094333662" watchObservedRunningTime="2026-03-08 00:34:54.627164607 +0000 UTC m=+716.101308971" Mar 08 00:34:54 crc kubenswrapper[4762]: I0308 00:34:54.654550 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-5cnvc" podStartSLOduration=16.687512925 podStartE2EDuration="23.654530648s" podCreationTimestamp="2026-03-08 00:34:31 +0000 UTC" firstStartedPulling="2026-03-08 00:34:46.67871916 +0000 UTC m=+708.152863504" lastFinishedPulling="2026-03-08 00:34:53.645736883 +0000 UTC m=+715.119881227" observedRunningTime="2026-03-08 00:34:54.644952555 +0000 UTC m=+716.119096919" watchObservedRunningTime="2026-03-08 00:34:54.654530648 +0000 UTC m=+716.128674992" Mar 08 00:34:55 crc kubenswrapper[4762]: I0308 00:34:55.596974 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" event={"ID":"f0e56a85-8dc3-4b03-9dc5-c9cce7682162","Type":"ContainerStarted","Data":"0d8d7ec9b2d19014a60d9de9b85d6ae8ec86bcefa3a3bf564dfe2225e416a099"} Mar 08 00:34:55 crc kubenswrapper[4762]: I0308 00:34:55.599066 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" event={"ID":"3d6adc3f-581b-489b-9bbd-dbc4e93c54f1","Type":"ContainerStarted","Data":"a89b188a3fa3b15f4398b61daafc8fa34ad8a53736984fc6c14122af67529729"} Mar 08 00:34:55 crc kubenswrapper[4762]: I0308 00:34:55.613907 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-8vm8n" podStartSLOduration=23.320595276 podStartE2EDuration="24.613882929s" podCreationTimestamp="2026-03-08 00:34:31 +0000 UTC" firstStartedPulling="2026-03-08 00:34:53.904850056 +0000 UTC m=+715.378994400" lastFinishedPulling="2026-03-08 00:34:55.198137709 +0000 UTC m=+716.672282053" observedRunningTime="2026-03-08 00:34:55.613826829 +0000 UTC m=+717.087971213" watchObservedRunningTime="2026-03-08 00:34:55.613882929 +0000 UTC m=+717.088027273" Mar 08 00:34:55 crc kubenswrapper[4762]: I0308 00:34:55.648201 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-75b9887994-rp4fg" podStartSLOduration=21.993859482 podStartE2EDuration="24.648175289s" podCreationTimestamp="2026-03-08 00:34:31 +0000 UTC" firstStartedPulling="2026-03-08 00:34:52.551780259 +0000 UTC m=+714.025924603" lastFinishedPulling="2026-03-08 00:34:55.206096066 +0000 UTC m=+716.680240410" observedRunningTime="2026-03-08 00:34:55.647318105 +0000 UTC m=+717.121462489" watchObservedRunningTime="2026-03-08 00:34:55.648175289 +0000 UTC m=+717.122319673" Mar 08 00:34:55 crc kubenswrapper[4762]: I0308 00:34:55.912448 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" Mar 08 00:34:59 crc kubenswrapper[4762]: I0308 00:34:59.996478 4762 scope.go:117] "RemoveContainer" containerID="04aefecf14b583bb4d35b3b92c22dfb189479db936045c79892a84465ab36fa4" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.705409 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-fbthd"] Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.706562 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fbthd" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.709182 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.709428 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.711859 4762 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-zw8kp" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.717267 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-fbthd"] Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.722364 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-zwr24"] Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.723454 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-zwr24" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.723540 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.726612 4762 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2dbhq" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.728106 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8pp92"] Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.728989 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.732064 4762 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-t8tvr" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.743336 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-zwr24"] Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.753838 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8pp92"] Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.792990 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mghbl\" (UniqueName: \"kubernetes.io/projected/604f6908-a2e3-47b3-82af-b2dd6dc5dde2-kube-api-access-mghbl\") pod \"cert-manager-cainjector-cf98fcc89-fbthd\" (UID: \"604f6908-a2e3-47b3-82af-b2dd6dc5dde2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-fbthd" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.793051 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrhnb\" (UniqueName: \"kubernetes.io/projected/f5da9b45-f4fb-4271-b27c-0d3e6251513c-kube-api-access-hrhnb\") pod \"cert-manager-858654f9db-zwr24\" (UID: \"f5da9b45-f4fb-4271-b27c-0d3e6251513c\") " pod="cert-manager/cert-manager-858654f9db-zwr24" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.793084 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m92x2\" (UniqueName: \"kubernetes.io/projected/9056b43f-9cc2-446b-a516-04ba97bf2fd0-kube-api-access-m92x2\") pod \"cert-manager-webhook-687f57d79b-8pp92\" (UID: \"9056b43f-9cc2-446b-a516-04ba97bf2fd0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.894385 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrhnb\" (UniqueName: \"kubernetes.io/projected/f5da9b45-f4fb-4271-b27c-0d3e6251513c-kube-api-access-hrhnb\") pod \"cert-manager-858654f9db-zwr24\" (UID: \"f5da9b45-f4fb-4271-b27c-0d3e6251513c\") " pod="cert-manager/cert-manager-858654f9db-zwr24" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.894453 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m92x2\" (UniqueName: \"kubernetes.io/projected/9056b43f-9cc2-446b-a516-04ba97bf2fd0-kube-api-access-m92x2\") pod \"cert-manager-webhook-687f57d79b-8pp92\" (UID: \"9056b43f-9cc2-446b-a516-04ba97bf2fd0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.894519 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mghbl\" (UniqueName: \"kubernetes.io/projected/604f6908-a2e3-47b3-82af-b2dd6dc5dde2-kube-api-access-mghbl\") pod \"cert-manager-cainjector-cf98fcc89-fbthd\" (UID: \"604f6908-a2e3-47b3-82af-b2dd6dc5dde2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-fbthd" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.916520 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mghbl\" (UniqueName: \"kubernetes.io/projected/604f6908-a2e3-47b3-82af-b2dd6dc5dde2-kube-api-access-mghbl\") pod \"cert-manager-cainjector-cf98fcc89-fbthd\" (UID: \"604f6908-a2e3-47b3-82af-b2dd6dc5dde2\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-fbthd" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.918297 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrhnb\" (UniqueName: \"kubernetes.io/projected/f5da9b45-f4fb-4271-b27c-0d3e6251513c-kube-api-access-hrhnb\") pod \"cert-manager-858654f9db-zwr24\" (UID: \"f5da9b45-f4fb-4271-b27c-0d3e6251513c\") " 
pod="cert-manager/cert-manager-858654f9db-zwr24" Mar 08 00:35:02 crc kubenswrapper[4762]: I0308 00:35:02.922188 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m92x2\" (UniqueName: \"kubernetes.io/projected/9056b43f-9cc2-446b-a516-04ba97bf2fd0-kube-api-access-m92x2\") pod \"cert-manager-webhook-687f57d79b-8pp92\" (UID: \"9056b43f-9cc2-446b-a516-04ba97bf2fd0\") " pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" Mar 08 00:35:03 crc kubenswrapper[4762]: I0308 00:35:03.037863 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fbthd" Mar 08 00:35:03 crc kubenswrapper[4762]: I0308 00:35:03.056957 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-zwr24" Mar 08 00:35:03 crc kubenswrapper[4762]: I0308 00:35:03.065486 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" Mar 08 00:35:03 crc kubenswrapper[4762]: I0308 00:35:03.340438 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-zwr24"] Mar 08 00:35:03 crc kubenswrapper[4762]: I0308 00:35:03.430800 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-fbthd"] Mar 08 00:35:03 crc kubenswrapper[4762]: W0308 00:35:03.468286 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9056b43f_9cc2_446b_a516_04ba97bf2fd0.slice/crio-29e8ca3e3b18399a7a218bcc55c20ad53e7fbd9e2ff1a875339c4c79b6281f15 WatchSource:0}: Error finding container 29e8ca3e3b18399a7a218bcc55c20ad53e7fbd9e2ff1a875339c4c79b6281f15: Status 404 returned error can't find the container with id 29e8ca3e3b18399a7a218bcc55c20ad53e7fbd9e2ff1a875339c4c79b6281f15 Mar 08 00:35:03 crc kubenswrapper[4762]: I0308 00:35:03.468915 
4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-8pp92"] Mar 08 00:35:03 crc kubenswrapper[4762]: I0308 00:35:03.656932 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fbthd" event={"ID":"604f6908-a2e3-47b3-82af-b2dd6dc5dde2","Type":"ContainerStarted","Data":"01919125d9c2489284ef926fa82add68a4bbee7ae0ddb812939f1e1c8778ae22"} Mar 08 00:35:03 crc kubenswrapper[4762]: I0308 00:35:03.658750 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" event={"ID":"9056b43f-9cc2-446b-a516-04ba97bf2fd0","Type":"ContainerStarted","Data":"29e8ca3e3b18399a7a218bcc55c20ad53e7fbd9e2ff1a875339c4c79b6281f15"} Mar 08 00:35:03 crc kubenswrapper[4762]: I0308 00:35:03.660834 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-zwr24" event={"ID":"f5da9b45-f4fb-4271-b27c-0d3e6251513c","Type":"ContainerStarted","Data":"9c97b386d83803fa468c8c729e89cdc9e524a15f83f5adb17e31e88d135bf651"} Mar 08 00:35:07 crc kubenswrapper[4762]: I0308 00:35:07.958527 4762 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 08 00:35:09 crc kubenswrapper[4762]: I0308 00:35:09.723710 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-zwr24" event={"ID":"f5da9b45-f4fb-4271-b27c-0d3e6251513c","Type":"ContainerStarted","Data":"5d736eb19384f6bbe82dd05f0f509fc0cdbe6542af85da09fa96f6ad237eeff1"} Mar 08 00:35:09 crc kubenswrapper[4762]: I0308 00:35:09.728443 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fbthd" event={"ID":"604f6908-a2e3-47b3-82af-b2dd6dc5dde2","Type":"ContainerStarted","Data":"3140837eeb606ea8dfa8f9bf9f6bb71a6063569afd047242c219f3d517f39b47"} Mar 08 00:35:09 crc kubenswrapper[4762]: I0308 00:35:09.732349 4762 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" event={"ID":"9056b43f-9cc2-446b-a516-04ba97bf2fd0","Type":"ContainerStarted","Data":"dcb011ed95c8bf61bea1bb7676aadf066a1fecc7c6dc202c956a4e44e2e4a8e0"} Mar 08 00:35:09 crc kubenswrapper[4762]: I0308 00:35:09.732640 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" Mar 08 00:35:09 crc kubenswrapper[4762]: I0308 00:35:09.754219 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-zwr24" podStartSLOduration=2.513276245 podStartE2EDuration="7.754174741s" podCreationTimestamp="2026-03-08 00:35:02 +0000 UTC" firstStartedPulling="2026-03-08 00:35:03.351837013 +0000 UTC m=+724.825981347" lastFinishedPulling="2026-03-08 00:35:08.592735459 +0000 UTC m=+730.066879843" observedRunningTime="2026-03-08 00:35:09.74959821 +0000 UTC m=+731.223742584" watchObservedRunningTime="2026-03-08 00:35:09.754174741 +0000 UTC m=+731.228319105" Mar 08 00:35:09 crc kubenswrapper[4762]: I0308 00:35:09.810317 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-fbthd" podStartSLOduration=2.663974855 podStartE2EDuration="7.810294653s" podCreationTimestamp="2026-03-08 00:35:02 +0000 UTC" firstStartedPulling="2026-03-08 00:35:03.437508428 +0000 UTC m=+724.911652772" lastFinishedPulling="2026-03-08 00:35:08.583828226 +0000 UTC m=+730.057972570" observedRunningTime="2026-03-08 00:35:09.782456988 +0000 UTC m=+731.256601352" watchObservedRunningTime="2026-03-08 00:35:09.810294653 +0000 UTC m=+731.284439007" Mar 08 00:35:09 crc kubenswrapper[4762]: I0308 00:35:09.818489 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" podStartSLOduration=2.649768591 podStartE2EDuration="7.818469857s" 
podCreationTimestamp="2026-03-08 00:35:02 +0000 UTC" firstStartedPulling="2026-03-08 00:35:03.47048249 +0000 UTC m=+724.944626834" lastFinishedPulling="2026-03-08 00:35:08.639183756 +0000 UTC m=+730.113328100" observedRunningTime="2026-03-08 00:35:09.81335544 +0000 UTC m=+731.287499794" watchObservedRunningTime="2026-03-08 00:35:09.818469857 +0000 UTC m=+731.292614221" Mar 08 00:35:12 crc kubenswrapper[4762]: I0308 00:35:12.851201 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 00:35:12 crc kubenswrapper[4762]: I0308 00:35:12.851545 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:35:12 crc kubenswrapper[4762]: I0308 00:35:12.851591 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:35:12 crc kubenswrapper[4762]: I0308 00:35:12.852154 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6e1d379555c081f977d5be76e9ba3af1b94dc051410584368d425f49016b85e4"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 00:35:12 crc kubenswrapper[4762]: I0308 00:35:12.852213 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" 
podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://6e1d379555c081f977d5be76e9ba3af1b94dc051410584368d425f49016b85e4" gracePeriod=600 Mar 08 00:35:13 crc kubenswrapper[4762]: I0308 00:35:13.068952 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" Mar 08 00:35:13 crc kubenswrapper[4762]: I0308 00:35:13.764185 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="6e1d379555c081f977d5be76e9ba3af1b94dc051410584368d425f49016b85e4" exitCode=0 Mar 08 00:35:13 crc kubenswrapper[4762]: I0308 00:35:13.764247 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"6e1d379555c081f977d5be76e9ba3af1b94dc051410584368d425f49016b85e4"} Mar 08 00:35:13 crc kubenswrapper[4762]: I0308 00:35:13.764780 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"c2606c2d00d50bbf62802680db1373962883eda1c9950ebf12d9d6c0b5953df4"} Mar 08 00:35:13 crc kubenswrapper[4762]: I0308 00:35:13.764808 4762 scope.go:117] "RemoveContainer" containerID="76f60a95ba76104d27683e873ff20dc2cc911e060fffacaab8d5230c6f720521" Mar 08 00:35:36 crc kubenswrapper[4762]: I0308 00:35:36.643610 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz"] Mar 08 00:35:36 crc kubenswrapper[4762]: I0308 00:35:36.645682 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" Mar 08 00:35:36 crc kubenswrapper[4762]: I0308 00:35:36.648646 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 08 00:35:36 crc kubenswrapper[4762]: I0308 00:35:36.658803 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz"] Mar 08 00:35:36 crc kubenswrapper[4762]: I0308 00:35:36.685594 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d493483-eff5-4dc1-881c-6dbac66ecffe-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz\" (UID: \"4d493483-eff5-4dc1-881c-6dbac66ecffe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" Mar 08 00:35:36 crc kubenswrapper[4762]: I0308 00:35:36.685675 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlnbr\" (UniqueName: \"kubernetes.io/projected/4d493483-eff5-4dc1-881c-6dbac66ecffe-kube-api-access-wlnbr\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz\" (UID: \"4d493483-eff5-4dc1-881c-6dbac66ecffe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" Mar 08 00:35:36 crc kubenswrapper[4762]: I0308 00:35:36.685709 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d493483-eff5-4dc1-881c-6dbac66ecffe-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz\" (UID: \"4d493483-eff5-4dc1-881c-6dbac66ecffe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" Mar 08 00:35:36 crc kubenswrapper[4762]: 
I0308 00:35:36.787446 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlnbr\" (UniqueName: \"kubernetes.io/projected/4d493483-eff5-4dc1-881c-6dbac66ecffe-kube-api-access-wlnbr\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz\" (UID: \"4d493483-eff5-4dc1-881c-6dbac66ecffe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" Mar 08 00:35:36 crc kubenswrapper[4762]: I0308 00:35:36.787529 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d493483-eff5-4dc1-881c-6dbac66ecffe-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz\" (UID: \"4d493483-eff5-4dc1-881c-6dbac66ecffe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" Mar 08 00:35:36 crc kubenswrapper[4762]: I0308 00:35:36.787592 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d493483-eff5-4dc1-881c-6dbac66ecffe-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz\" (UID: \"4d493483-eff5-4dc1-881c-6dbac66ecffe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" Mar 08 00:35:36 crc kubenswrapper[4762]: I0308 00:35:36.788148 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d493483-eff5-4dc1-881c-6dbac66ecffe-util\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz\" (UID: \"4d493483-eff5-4dc1-881c-6dbac66ecffe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" Mar 08 00:35:36 crc kubenswrapper[4762]: I0308 00:35:36.788210 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4d493483-eff5-4dc1-881c-6dbac66ecffe-bundle\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz\" (UID: \"4d493483-eff5-4dc1-881c-6dbac66ecffe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" Mar 08 00:35:36 crc kubenswrapper[4762]: I0308 00:35:36.821310 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlnbr\" (UniqueName: \"kubernetes.io/projected/4d493483-eff5-4dc1-881c-6dbac66ecffe-kube-api-access-wlnbr\") pod \"e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz\" (UID: \"4d493483-eff5-4dc1-881c-6dbac66ecffe\") " pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" Mar 08 00:35:36 crc kubenswrapper[4762]: I0308 00:35:36.964035 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.042375 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls"] Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.043901 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.052962 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls"] Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.091903 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhmnl\" (UniqueName: \"kubernetes.io/projected/c97c6c2e-29ab-4045-912c-289db81216bd-kube-api-access-hhmnl\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls\" (UID: \"c97c6c2e-29ab-4045-912c-289db81216bd\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.091980 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c97c6c2e-29ab-4045-912c-289db81216bd-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls\" (UID: \"c97c6c2e-29ab-4045-912c-289db81216bd\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.092008 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c97c6c2e-29ab-4045-912c-289db81216bd-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls\" (UID: \"c97c6c2e-29ab-4045-912c-289db81216bd\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.193265 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/c97c6c2e-29ab-4045-912c-289db81216bd-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls\" (UID: \"c97c6c2e-29ab-4045-912c-289db81216bd\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.193343 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c97c6c2e-29ab-4045-912c-289db81216bd-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls\" (UID: \"c97c6c2e-29ab-4045-912c-289db81216bd\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.193477 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhmnl\" (UniqueName: \"kubernetes.io/projected/c97c6c2e-29ab-4045-912c-289db81216bd-kube-api-access-hhmnl\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls\" (UID: \"c97c6c2e-29ab-4045-912c-289db81216bd\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.194197 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c97c6c2e-29ab-4045-912c-289db81216bd-bundle\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls\" (UID: \"c97c6c2e-29ab-4045-912c-289db81216bd\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.194346 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c97c6c2e-29ab-4045-912c-289db81216bd-util\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls\" (UID: 
\"c97c6c2e-29ab-4045-912c-289db81216bd\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.225674 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhmnl\" (UniqueName: \"kubernetes.io/projected/c97c6c2e-29ab-4045-912c-289db81216bd-kube-api-access-hhmnl\") pod \"371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls\" (UID: \"c97c6c2e-29ab-4045-912c-289db81216bd\") " pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.250663 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz"] Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.364053 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.619007 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls"] Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.948080 4762 generic.go:334] "Generic (PLEG): container finished" podID="c97c6c2e-29ab-4045-912c-289db81216bd" containerID="b8ae6f11fae8f39d302e6847ebf537ae9868d6fd36655df75366d020962635e8" exitCode=0 Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.948167 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" event={"ID":"c97c6c2e-29ab-4045-912c-289db81216bd","Type":"ContainerDied","Data":"b8ae6f11fae8f39d302e6847ebf537ae9868d6fd36655df75366d020962635e8"} Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.948201 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" event={"ID":"c97c6c2e-29ab-4045-912c-289db81216bd","Type":"ContainerStarted","Data":"7768dce8a09b721ad1528ef2fe7384d4119e2807e02d1048611aa89efe32d5df"} Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.950915 4762 generic.go:334] "Generic (PLEG): container finished" podID="4d493483-eff5-4dc1-881c-6dbac66ecffe" containerID="f2e87767fba985bf1b9daa7e5acc7a7eb89590d93e21212dd16e8a8366db76eb" exitCode=0 Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.950951 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" event={"ID":"4d493483-eff5-4dc1-881c-6dbac66ecffe","Type":"ContainerDied","Data":"f2e87767fba985bf1b9daa7e5acc7a7eb89590d93e21212dd16e8a8366db76eb"} Mar 08 00:35:37 crc kubenswrapper[4762]: I0308 00:35:37.950971 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" event={"ID":"4d493483-eff5-4dc1-881c-6dbac66ecffe","Type":"ContainerStarted","Data":"984c6cdcabd53522099827b5a5b50663a27b73e48d9aed3fb64cf369e6b7752b"} Mar 08 00:35:39 crc kubenswrapper[4762]: I0308 00:35:39.964192 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" event={"ID":"4d493483-eff5-4dc1-881c-6dbac66ecffe","Type":"ContainerDied","Data":"483f1e6c321cb68eff8f386bec22c793a25526667881a9f44b8f33252823cac7"} Mar 08 00:35:39 crc kubenswrapper[4762]: I0308 00:35:39.964083 4762 generic.go:334] "Generic (PLEG): container finished" podID="4d493483-eff5-4dc1-881c-6dbac66ecffe" containerID="483f1e6c321cb68eff8f386bec22c793a25526667881a9f44b8f33252823cac7" exitCode=0 Mar 08 00:35:39 crc kubenswrapper[4762]: I0308 00:35:39.966734 4762 generic.go:334] "Generic (PLEG): container finished" 
podID="c97c6c2e-29ab-4045-912c-289db81216bd" containerID="8ebec37d581cf5ab7649c3669da592d7fbcd07c34a45cdd6978d65ec58fe1e3d" exitCode=0 Mar 08 00:35:39 crc kubenswrapper[4762]: I0308 00:35:39.966782 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" event={"ID":"c97c6c2e-29ab-4045-912c-289db81216bd","Type":"ContainerDied","Data":"8ebec37d581cf5ab7649c3669da592d7fbcd07c34a45cdd6978d65ec58fe1e3d"} Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.376355 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6kdk8"] Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.378157 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.388987 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6kdk8"] Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.440018 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmlbx\" (UniqueName: \"kubernetes.io/projected/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-kube-api-access-lmlbx\") pod \"redhat-operators-6kdk8\" (UID: \"79595c60-ef68-44cd-91b5-b2dc5fd1e77f\") " pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.445040 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-utilities\") pod \"redhat-operators-6kdk8\" (UID: \"79595c60-ef68-44cd-91b5-b2dc5fd1e77f\") " pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.445364 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-catalog-content\") pod \"redhat-operators-6kdk8\" (UID: \"79595c60-ef68-44cd-91b5-b2dc5fd1e77f\") " pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.547369 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-catalog-content\") pod \"redhat-operators-6kdk8\" (UID: \"79595c60-ef68-44cd-91b5-b2dc5fd1e77f\") " pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.547503 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmlbx\" (UniqueName: \"kubernetes.io/projected/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-kube-api-access-lmlbx\") pod \"redhat-operators-6kdk8\" (UID: \"79595c60-ef68-44cd-91b5-b2dc5fd1e77f\") " pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.547549 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-utilities\") pod \"redhat-operators-6kdk8\" (UID: \"79595c60-ef68-44cd-91b5-b2dc5fd1e77f\") " pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.548316 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-catalog-content\") pod \"redhat-operators-6kdk8\" (UID: \"79595c60-ef68-44cd-91b5-b2dc5fd1e77f\") " pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.548391 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-utilities\") pod \"redhat-operators-6kdk8\" (UID: \"79595c60-ef68-44cd-91b5-b2dc5fd1e77f\") " pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.575736 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmlbx\" (UniqueName: \"kubernetes.io/projected/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-kube-api-access-lmlbx\") pod \"redhat-operators-6kdk8\" (UID: \"79595c60-ef68-44cd-91b5-b2dc5fd1e77f\") " pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.750548 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.969578 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6kdk8"] Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.974613 4762 generic.go:334] "Generic (PLEG): container finished" podID="4d493483-eff5-4dc1-881c-6dbac66ecffe" containerID="164c01ac212e0fe04100e6342b5e27dad7fd1d345f736ae4999900e0a19047cd" exitCode=0 Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.974689 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" event={"ID":"4d493483-eff5-4dc1-881c-6dbac66ecffe","Type":"ContainerDied","Data":"164c01ac212e0fe04100e6342b5e27dad7fd1d345f736ae4999900e0a19047cd"} Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.980785 4762 generic.go:334] "Generic (PLEG): container finished" podID="c97c6c2e-29ab-4045-912c-289db81216bd" containerID="e13c516f44a784d91b59d36ae46346aea715d0a51224343b303294edf25c6b2a" exitCode=0 Mar 08 00:35:40 crc kubenswrapper[4762]: I0308 00:35:40.980830 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" event={"ID":"c97c6c2e-29ab-4045-912c-289db81216bd","Type":"ContainerDied","Data":"e13c516f44a784d91b59d36ae46346aea715d0a51224343b303294edf25c6b2a"} Mar 08 00:35:41 crc kubenswrapper[4762]: I0308 00:35:41.989372 4762 generic.go:334] "Generic (PLEG): container finished" podID="79595c60-ef68-44cd-91b5-b2dc5fd1e77f" containerID="111d5afc5269e37ebef1bf94a90fdbbe89395877da7f57e025b42da3d971f9e3" exitCode=0 Mar 08 00:35:41 crc kubenswrapper[4762]: I0308 00:35:41.989434 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kdk8" event={"ID":"79595c60-ef68-44cd-91b5-b2dc5fd1e77f","Type":"ContainerDied","Data":"111d5afc5269e37ebef1bf94a90fdbbe89395877da7f57e025b42da3d971f9e3"} Mar 08 00:35:41 crc kubenswrapper[4762]: I0308 00:35:41.989770 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kdk8" event={"ID":"79595c60-ef68-44cd-91b5-b2dc5fd1e77f","Type":"ContainerStarted","Data":"29710f8c5d9688bb953100f72af2ea49894f25a5d8abf2b43273953dc67bf13e"} Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.324482 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.327174 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.369614 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhmnl\" (UniqueName: \"kubernetes.io/projected/c97c6c2e-29ab-4045-912c-289db81216bd-kube-api-access-hhmnl\") pod \"c97c6c2e-29ab-4045-912c-289db81216bd\" (UID: \"c97c6c2e-29ab-4045-912c-289db81216bd\") " Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.369769 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d493483-eff5-4dc1-881c-6dbac66ecffe-bundle\") pod \"4d493483-eff5-4dc1-881c-6dbac66ecffe\" (UID: \"4d493483-eff5-4dc1-881c-6dbac66ecffe\") " Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.369857 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d493483-eff5-4dc1-881c-6dbac66ecffe-util\") pod \"4d493483-eff5-4dc1-881c-6dbac66ecffe\" (UID: \"4d493483-eff5-4dc1-881c-6dbac66ecffe\") " Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.369900 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c97c6c2e-29ab-4045-912c-289db81216bd-util\") pod \"c97c6c2e-29ab-4045-912c-289db81216bd\" (UID: \"c97c6c2e-29ab-4045-912c-289db81216bd\") " Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.369928 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlnbr\" (UniqueName: \"kubernetes.io/projected/4d493483-eff5-4dc1-881c-6dbac66ecffe-kube-api-access-wlnbr\") pod \"4d493483-eff5-4dc1-881c-6dbac66ecffe\" (UID: \"4d493483-eff5-4dc1-881c-6dbac66ecffe\") " Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.369947 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c97c6c2e-29ab-4045-912c-289db81216bd-bundle\") pod \"c97c6c2e-29ab-4045-912c-289db81216bd\" (UID: \"c97c6c2e-29ab-4045-912c-289db81216bd\") " Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.371601 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d493483-eff5-4dc1-881c-6dbac66ecffe-bundle" (OuterVolumeSpecName: "bundle") pod "4d493483-eff5-4dc1-881c-6dbac66ecffe" (UID: "4d493483-eff5-4dc1-881c-6dbac66ecffe"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.373293 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c97c6c2e-29ab-4045-912c-289db81216bd-bundle" (OuterVolumeSpecName: "bundle") pod "c97c6c2e-29ab-4045-912c-289db81216bd" (UID: "c97c6c2e-29ab-4045-912c-289db81216bd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.378387 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d493483-eff5-4dc1-881c-6dbac66ecffe-kube-api-access-wlnbr" (OuterVolumeSpecName: "kube-api-access-wlnbr") pod "4d493483-eff5-4dc1-881c-6dbac66ecffe" (UID: "4d493483-eff5-4dc1-881c-6dbac66ecffe"). InnerVolumeSpecName "kube-api-access-wlnbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.391203 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c97c6c2e-29ab-4045-912c-289db81216bd-util" (OuterVolumeSpecName: "util") pod "c97c6c2e-29ab-4045-912c-289db81216bd" (UID: "c97c6c2e-29ab-4045-912c-289db81216bd"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.394979 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c97c6c2e-29ab-4045-912c-289db81216bd-kube-api-access-hhmnl" (OuterVolumeSpecName: "kube-api-access-hhmnl") pod "c97c6c2e-29ab-4045-912c-289db81216bd" (UID: "c97c6c2e-29ab-4045-912c-289db81216bd"). InnerVolumeSpecName "kube-api-access-hhmnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.399924 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d493483-eff5-4dc1-881c-6dbac66ecffe-util" (OuterVolumeSpecName: "util") pod "4d493483-eff5-4dc1-881c-6dbac66ecffe" (UID: "4d493483-eff5-4dc1-881c-6dbac66ecffe"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.470845 4762 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4d493483-eff5-4dc1-881c-6dbac66ecffe-util\") on node \"crc\" DevicePath \"\"" Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.470878 4762 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c97c6c2e-29ab-4045-912c-289db81216bd-util\") on node \"crc\" DevicePath \"\"" Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.470888 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlnbr\" (UniqueName: \"kubernetes.io/projected/4d493483-eff5-4dc1-881c-6dbac66ecffe-kube-api-access-wlnbr\") on node \"crc\" DevicePath \"\"" Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.470898 4762 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c97c6c2e-29ab-4045-912c-289db81216bd-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:35:42 crc 
kubenswrapper[4762]: I0308 00:35:42.470907 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhmnl\" (UniqueName: \"kubernetes.io/projected/c97c6c2e-29ab-4045-912c-289db81216bd-kube-api-access-hhmnl\") on node \"crc\" DevicePath \"\"" Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.470915 4762 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4d493483-eff5-4dc1-881c-6dbac66ecffe-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:35:42 crc kubenswrapper[4762]: I0308 00:35:42.998600 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kdk8" event={"ID":"79595c60-ef68-44cd-91b5-b2dc5fd1e77f","Type":"ContainerStarted","Data":"f6d8975de94227eb73527077e5d76bad5a47767a469ccdfd1ef245209bc1335d"} Mar 08 00:35:43 crc kubenswrapper[4762]: I0308 00:35:43.002905 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" event={"ID":"c97c6c2e-29ab-4045-912c-289db81216bd","Type":"ContainerDied","Data":"7768dce8a09b721ad1528ef2fe7384d4119e2807e02d1048611aa89efe32d5df"} Mar 08 00:35:43 crc kubenswrapper[4762]: I0308 00:35:43.002945 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7768dce8a09b721ad1528ef2fe7384d4119e2807e02d1048611aa89efe32d5df" Mar 08 00:35:43 crc kubenswrapper[4762]: I0308 00:35:43.002990 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls" Mar 08 00:35:43 crc kubenswrapper[4762]: I0308 00:35:43.014574 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" event={"ID":"4d493483-eff5-4dc1-881c-6dbac66ecffe","Type":"ContainerDied","Data":"984c6cdcabd53522099827b5a5b50663a27b73e48d9aed3fb64cf369e6b7752b"} Mar 08 00:35:43 crc kubenswrapper[4762]: I0308 00:35:43.014599 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="984c6cdcabd53522099827b5a5b50663a27b73e48d9aed3fb64cf369e6b7752b" Mar 08 00:35:43 crc kubenswrapper[4762]: I0308 00:35:43.014655 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz" Mar 08 00:35:44 crc kubenswrapper[4762]: I0308 00:35:44.023665 4762 generic.go:334] "Generic (PLEG): container finished" podID="79595c60-ef68-44cd-91b5-b2dc5fd1e77f" containerID="f6d8975de94227eb73527077e5d76bad5a47767a469ccdfd1ef245209bc1335d" exitCode=0 Mar 08 00:35:44 crc kubenswrapper[4762]: I0308 00:35:44.023991 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kdk8" event={"ID":"79595c60-ef68-44cd-91b5-b2dc5fd1e77f","Type":"ContainerDied","Data":"f6d8975de94227eb73527077e5d76bad5a47767a469ccdfd1ef245209bc1335d"} Mar 08 00:35:45 crc kubenswrapper[4762]: I0308 00:35:45.033925 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kdk8" event={"ID":"79595c60-ef68-44cd-91b5-b2dc5fd1e77f","Type":"ContainerStarted","Data":"79dc01b9cb3bc12c42390c2a528ff68e4e5b456850e6bf508497a0fdce60b2fd"} Mar 08 00:35:45 crc kubenswrapper[4762]: I0308 00:35:45.063600 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-6kdk8" podStartSLOduration=2.617387836 podStartE2EDuration="5.063573739s" podCreationTimestamp="2026-03-08 00:35:40 +0000 UTC" firstStartedPulling="2026-03-08 00:35:41.993564845 +0000 UTC m=+763.467709199" lastFinishedPulling="2026-03-08 00:35:44.439750718 +0000 UTC m=+765.913895102" observedRunningTime="2026-03-08 00:35:45.057784934 +0000 UTC m=+766.531929318" watchObservedRunningTime="2026-03-08 00:35:45.063573739 +0000 UTC m=+766.537718123" Mar 08 00:35:50 crc kubenswrapper[4762]: I0308 00:35:50.751062 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:35:50 crc kubenswrapper[4762]: I0308 00:35:50.751792 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:35:51 crc kubenswrapper[4762]: I0308 00:35:51.788214 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6kdk8" podUID="79595c60-ef68-44cd-91b5-b2dc5fd1e77f" containerName="registry-server" probeResult="failure" output=< Mar 08 00:35:51 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 00:35:51 crc kubenswrapper[4762]: > Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.668408 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md"] Mar 08 00:35:53 crc kubenswrapper[4762]: E0308 00:35:53.668628 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97c6c2e-29ab-4045-912c-289db81216bd" containerName="pull" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.668639 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97c6c2e-29ab-4045-912c-289db81216bd" containerName="pull" Mar 08 00:35:53 crc kubenswrapper[4762]: E0308 00:35:53.668650 4762 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c97c6c2e-29ab-4045-912c-289db81216bd" containerName="extract" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.668656 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97c6c2e-29ab-4045-912c-289db81216bd" containerName="extract" Mar 08 00:35:53 crc kubenswrapper[4762]: E0308 00:35:53.668670 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d493483-eff5-4dc1-881c-6dbac66ecffe" containerName="util" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.668676 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d493483-eff5-4dc1-881c-6dbac66ecffe" containerName="util" Mar 08 00:35:53 crc kubenswrapper[4762]: E0308 00:35:53.668684 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d493483-eff5-4dc1-881c-6dbac66ecffe" containerName="pull" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.668689 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d493483-eff5-4dc1-881c-6dbac66ecffe" containerName="pull" Mar 08 00:35:53 crc kubenswrapper[4762]: E0308 00:35:53.668698 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d493483-eff5-4dc1-881c-6dbac66ecffe" containerName="extract" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.668704 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d493483-eff5-4dc1-881c-6dbac66ecffe" containerName="extract" Mar 08 00:35:53 crc kubenswrapper[4762]: E0308 00:35:53.668712 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97c6c2e-29ab-4045-912c-289db81216bd" containerName="util" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.668719 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97c6c2e-29ab-4045-912c-289db81216bd" containerName="util" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.668827 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="c97c6c2e-29ab-4045-912c-289db81216bd" containerName="extract" Mar 08 00:35:53 crc kubenswrapper[4762]: 
I0308 00:35:53.668836 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d493483-eff5-4dc1-881c-6dbac66ecffe" containerName="extract" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.669394 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.674153 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.674351 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.675394 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.675531 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.675813 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.677283 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-mmbkd" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.692260 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md"] Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.739286 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/b242b134-d2b7-4e03-a6c1-cd046de89c3d-apiservice-cert\") pod \"loki-operator-controller-manager-55b56f86c9-fm7md\" (UID: \"b242b134-d2b7-4e03-a6c1-cd046de89c3d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.739370 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b242b134-d2b7-4e03-a6c1-cd046de89c3d-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-55b56f86c9-fm7md\" (UID: \"b242b134-d2b7-4e03-a6c1-cd046de89c3d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.739396 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b242b134-d2b7-4e03-a6c1-cd046de89c3d-webhook-cert\") pod \"loki-operator-controller-manager-55b56f86c9-fm7md\" (UID: \"b242b134-d2b7-4e03-a6c1-cd046de89c3d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.739554 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrlw5\" (UniqueName: \"kubernetes.io/projected/b242b134-d2b7-4e03-a6c1-cd046de89c3d-kube-api-access-hrlw5\") pod \"loki-operator-controller-manager-55b56f86c9-fm7md\" (UID: \"b242b134-d2b7-4e03-a6c1-cd046de89c3d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.739622 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b242b134-d2b7-4e03-a6c1-cd046de89c3d-manager-config\") pod 
\"loki-operator-controller-manager-55b56f86c9-fm7md\" (UID: \"b242b134-d2b7-4e03-a6c1-cd046de89c3d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.840968 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrlw5\" (UniqueName: \"kubernetes.io/projected/b242b134-d2b7-4e03-a6c1-cd046de89c3d-kube-api-access-hrlw5\") pod \"loki-operator-controller-manager-55b56f86c9-fm7md\" (UID: \"b242b134-d2b7-4e03-a6c1-cd046de89c3d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.841031 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b242b134-d2b7-4e03-a6c1-cd046de89c3d-manager-config\") pod \"loki-operator-controller-manager-55b56f86c9-fm7md\" (UID: \"b242b134-d2b7-4e03-a6c1-cd046de89c3d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.841061 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b242b134-d2b7-4e03-a6c1-cd046de89c3d-apiservice-cert\") pod \"loki-operator-controller-manager-55b56f86c9-fm7md\" (UID: \"b242b134-d2b7-4e03-a6c1-cd046de89c3d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.841123 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b242b134-d2b7-4e03-a6c1-cd046de89c3d-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-55b56f86c9-fm7md\" (UID: \"b242b134-d2b7-4e03-a6c1-cd046de89c3d\") " 
pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.841147 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b242b134-d2b7-4e03-a6c1-cd046de89c3d-webhook-cert\") pod \"loki-operator-controller-manager-55b56f86c9-fm7md\" (UID: \"b242b134-d2b7-4e03-a6c1-cd046de89c3d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.843142 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/b242b134-d2b7-4e03-a6c1-cd046de89c3d-manager-config\") pod \"loki-operator-controller-manager-55b56f86c9-fm7md\" (UID: \"b242b134-d2b7-4e03-a6c1-cd046de89c3d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.847336 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b242b134-d2b7-4e03-a6c1-cd046de89c3d-webhook-cert\") pod \"loki-operator-controller-manager-55b56f86c9-fm7md\" (UID: \"b242b134-d2b7-4e03-a6c1-cd046de89c3d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.847415 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b242b134-d2b7-4e03-a6c1-cd046de89c3d-apiservice-cert\") pod \"loki-operator-controller-manager-55b56f86c9-fm7md\" (UID: \"b242b134-d2b7-4e03-a6c1-cd046de89c3d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.865433 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrlw5\" 
(UniqueName: \"kubernetes.io/projected/b242b134-d2b7-4e03-a6c1-cd046de89c3d-kube-api-access-hrlw5\") pod \"loki-operator-controller-manager-55b56f86c9-fm7md\" (UID: \"b242b134-d2b7-4e03-a6c1-cd046de89c3d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.880525 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b242b134-d2b7-4e03-a6c1-cd046de89c3d-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-55b56f86c9-fm7md\" (UID: \"b242b134-d2b7-4e03-a6c1-cd046de89c3d\") " pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:53 crc kubenswrapper[4762]: I0308 00:35:53.983789 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:35:54 crc kubenswrapper[4762]: I0308 00:35:54.222406 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md"] Mar 08 00:35:55 crc kubenswrapper[4762]: I0308 00:35:55.109182 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" event={"ID":"b242b134-d2b7-4e03-a6c1-cd046de89c3d","Type":"ContainerStarted","Data":"1661739f6b475627ed11b6ea0e693d68e62d97471c9671c9333e4079ca3811b4"} Mar 08 00:35:57 crc kubenswrapper[4762]: I0308 00:35:57.580570 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-k82gj"] Mar 08 00:35:57 crc kubenswrapper[4762]: I0308 00:35:57.581724 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-k82gj" Mar 08 00:35:57 crc kubenswrapper[4762]: I0308 00:35:57.585289 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Mar 08 00:35:57 crc kubenswrapper[4762]: I0308 00:35:57.586533 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-z4l5z" Mar 08 00:35:57 crc kubenswrapper[4762]: I0308 00:35:57.586717 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Mar 08 00:35:57 crc kubenswrapper[4762]: I0308 00:35:57.602719 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-k82gj"] Mar 08 00:35:57 crc kubenswrapper[4762]: I0308 00:35:57.718123 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp85k\" (UniqueName: \"kubernetes.io/projected/c0dea8af-c19d-492c-a9ab-e271b9edad28-kube-api-access-pp85k\") pod \"cluster-logging-operator-c769fd969-k82gj\" (UID: \"c0dea8af-c19d-492c-a9ab-e271b9edad28\") " pod="openshift-logging/cluster-logging-operator-c769fd969-k82gj" Mar 08 00:35:57 crc kubenswrapper[4762]: I0308 00:35:57.819338 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp85k\" (UniqueName: \"kubernetes.io/projected/c0dea8af-c19d-492c-a9ab-e271b9edad28-kube-api-access-pp85k\") pod \"cluster-logging-operator-c769fd969-k82gj\" (UID: \"c0dea8af-c19d-492c-a9ab-e271b9edad28\") " pod="openshift-logging/cluster-logging-operator-c769fd969-k82gj" Mar 08 00:35:57 crc kubenswrapper[4762]: I0308 00:35:57.839324 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp85k\" (UniqueName: \"kubernetes.io/projected/c0dea8af-c19d-492c-a9ab-e271b9edad28-kube-api-access-pp85k\") pod 
\"cluster-logging-operator-c769fd969-k82gj\" (UID: \"c0dea8af-c19d-492c-a9ab-e271b9edad28\") " pod="openshift-logging/cluster-logging-operator-c769fd969-k82gj" Mar 08 00:35:57 crc kubenswrapper[4762]: I0308 00:35:57.905834 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/cluster-logging-operator-c769fd969-k82gj" Mar 08 00:35:59 crc kubenswrapper[4762]: I0308 00:35:59.503497 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-c769fd969-k82gj"] Mar 08 00:36:00 crc kubenswrapper[4762]: I0308 00:36:00.130453 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548836-kfpfl"] Mar 08 00:36:00 crc kubenswrapper[4762]: I0308 00:36:00.135451 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548836-kfpfl" Mar 08 00:36:00 crc kubenswrapper[4762]: I0308 00:36:00.138033 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 00:36:00 crc kubenswrapper[4762]: I0308 00:36:00.138426 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 00:36:00 crc kubenswrapper[4762]: I0308 00:36:00.140468 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 00:36:00 crc kubenswrapper[4762]: I0308 00:36:00.146892 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548836-kfpfl"] Mar 08 00:36:00 crc kubenswrapper[4762]: I0308 00:36:00.156268 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" event={"ID":"b242b134-d2b7-4e03-a6c1-cd046de89c3d","Type":"ContainerStarted","Data":"fb7f58da6b70b80b72db9aa10575196d3436636464686918b3cb6f90ff8e8e84"} Mar 08 00:36:00 crc 
kubenswrapper[4762]: I0308 00:36:00.157541 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-k82gj" event={"ID":"c0dea8af-c19d-492c-a9ab-e271b9edad28","Type":"ContainerStarted","Data":"d1a0b956946f2dac1e43e18687ee41a6757d93f6553fb667ba2fba3e5f7b3316"} Mar 08 00:36:00 crc kubenswrapper[4762]: I0308 00:36:00.247558 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdsqr\" (UniqueName: \"kubernetes.io/projected/0a16712a-b605-4c8b-9382-8c630322ca2c-kube-api-access-rdsqr\") pod \"auto-csr-approver-29548836-kfpfl\" (UID: \"0a16712a-b605-4c8b-9382-8c630322ca2c\") " pod="openshift-infra/auto-csr-approver-29548836-kfpfl" Mar 08 00:36:00 crc kubenswrapper[4762]: I0308 00:36:00.348428 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdsqr\" (UniqueName: \"kubernetes.io/projected/0a16712a-b605-4c8b-9382-8c630322ca2c-kube-api-access-rdsqr\") pod \"auto-csr-approver-29548836-kfpfl\" (UID: \"0a16712a-b605-4c8b-9382-8c630322ca2c\") " pod="openshift-infra/auto-csr-approver-29548836-kfpfl" Mar 08 00:36:00 crc kubenswrapper[4762]: I0308 00:36:00.372636 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdsqr\" (UniqueName: \"kubernetes.io/projected/0a16712a-b605-4c8b-9382-8c630322ca2c-kube-api-access-rdsqr\") pod \"auto-csr-approver-29548836-kfpfl\" (UID: \"0a16712a-b605-4c8b-9382-8c630322ca2c\") " pod="openshift-infra/auto-csr-approver-29548836-kfpfl" Mar 08 00:36:00 crc kubenswrapper[4762]: I0308 00:36:00.457142 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548836-kfpfl" Mar 08 00:36:00 crc kubenswrapper[4762]: I0308 00:36:00.813139 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:36:00 crc kubenswrapper[4762]: I0308 00:36:00.856596 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:36:00 crc kubenswrapper[4762]: I0308 00:36:00.998810 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548836-kfpfl"] Mar 08 00:36:01 crc kubenswrapper[4762]: W0308 00:36:01.008030 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a16712a_b605_4c8b_9382_8c630322ca2c.slice/crio-528a260dd8f157ac3e1f9545f15babed53f3454aa3c8842adcc6f9500ef2749a WatchSource:0}: Error finding container 528a260dd8f157ac3e1f9545f15babed53f3454aa3c8842adcc6f9500ef2749a: Status 404 returned error can't find the container with id 528a260dd8f157ac3e1f9545f15babed53f3454aa3c8842adcc6f9500ef2749a Mar 08 00:36:01 crc kubenswrapper[4762]: I0308 00:36:01.173412 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548836-kfpfl" event={"ID":"0a16712a-b605-4c8b-9382-8c630322ca2c","Type":"ContainerStarted","Data":"528a260dd8f157ac3e1f9545f15babed53f3454aa3c8842adcc6f9500ef2749a"} Mar 08 00:36:03 crc kubenswrapper[4762]: I0308 00:36:03.196782 4762 generic.go:334] "Generic (PLEG): container finished" podID="0a16712a-b605-4c8b-9382-8c630322ca2c" containerID="413c51e7f94dc6fa2448119be31d59a2f43d81236accfd17b5d1a149717b8e37" exitCode=0 Mar 08 00:36:03 crc kubenswrapper[4762]: I0308 00:36:03.196967 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548836-kfpfl" 
event={"ID":"0a16712a-b605-4c8b-9382-8c630322ca2c","Type":"ContainerDied","Data":"413c51e7f94dc6fa2448119be31d59a2f43d81236accfd17b5d1a149717b8e37"} Mar 08 00:36:03 crc kubenswrapper[4762]: I0308 00:36:03.773129 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6kdk8"] Mar 08 00:36:03 crc kubenswrapper[4762]: I0308 00:36:03.773572 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6kdk8" podUID="79595c60-ef68-44cd-91b5-b2dc5fd1e77f" containerName="registry-server" containerID="cri-o://79dc01b9cb3bc12c42390c2a528ff68e4e5b456850e6bf508497a0fdce60b2fd" gracePeriod=2 Mar 08 00:36:04 crc kubenswrapper[4762]: I0308 00:36:04.214158 4762 generic.go:334] "Generic (PLEG): container finished" podID="79595c60-ef68-44cd-91b5-b2dc5fd1e77f" containerID="79dc01b9cb3bc12c42390c2a528ff68e4e5b456850e6bf508497a0fdce60b2fd" exitCode=0 Mar 08 00:36:04 crc kubenswrapper[4762]: I0308 00:36:04.214313 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kdk8" event={"ID":"79595c60-ef68-44cd-91b5-b2dc5fd1e77f","Type":"ContainerDied","Data":"79dc01b9cb3bc12c42390c2a528ff68e4e5b456850e6bf508497a0fdce60b2fd"} Mar 08 00:36:07 crc kubenswrapper[4762]: I0308 00:36:07.211431 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548836-kfpfl" Mar 08 00:36:07 crc kubenswrapper[4762]: I0308 00:36:07.255510 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548836-kfpfl" event={"ID":"0a16712a-b605-4c8b-9382-8c630322ca2c","Type":"ContainerDied","Data":"528a260dd8f157ac3e1f9545f15babed53f3454aa3c8842adcc6f9500ef2749a"} Mar 08 00:36:07 crc kubenswrapper[4762]: I0308 00:36:07.255780 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="528a260dd8f157ac3e1f9545f15babed53f3454aa3c8842adcc6f9500ef2749a" Mar 08 00:36:07 crc kubenswrapper[4762]: I0308 00:36:07.255854 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548836-kfpfl" Mar 08 00:36:07 crc kubenswrapper[4762]: I0308 00:36:07.353405 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdsqr\" (UniqueName: \"kubernetes.io/projected/0a16712a-b605-4c8b-9382-8c630322ca2c-kube-api-access-rdsqr\") pod \"0a16712a-b605-4c8b-9382-8c630322ca2c\" (UID: \"0a16712a-b605-4c8b-9382-8c630322ca2c\") " Mar 08 00:36:07 crc kubenswrapper[4762]: I0308 00:36:07.375297 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a16712a-b605-4c8b-9382-8c630322ca2c-kube-api-access-rdsqr" (OuterVolumeSpecName: "kube-api-access-rdsqr") pod "0a16712a-b605-4c8b-9382-8c630322ca2c" (UID: "0a16712a-b605-4c8b-9382-8c630322ca2c"). InnerVolumeSpecName "kube-api-access-rdsqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:36:07 crc kubenswrapper[4762]: I0308 00:36:07.455750 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdsqr\" (UniqueName: \"kubernetes.io/projected/0a16712a-b605-4c8b-9382-8c630322ca2c-kube-api-access-rdsqr\") on node \"crc\" DevicePath \"\"" Mar 08 00:36:07 crc kubenswrapper[4762]: I0308 00:36:07.812950 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:36:07 crc kubenswrapper[4762]: I0308 00:36:07.990281 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmlbx\" (UniqueName: \"kubernetes.io/projected/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-kube-api-access-lmlbx\") pod \"79595c60-ef68-44cd-91b5-b2dc5fd1e77f\" (UID: \"79595c60-ef68-44cd-91b5-b2dc5fd1e77f\") " Mar 08 00:36:07 crc kubenswrapper[4762]: I0308 00:36:07.991022 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-catalog-content\") pod \"79595c60-ef68-44cd-91b5-b2dc5fd1e77f\" (UID: \"79595c60-ef68-44cd-91b5-b2dc5fd1e77f\") " Mar 08 00:36:07 crc kubenswrapper[4762]: I0308 00:36:07.991061 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-utilities\") pod \"79595c60-ef68-44cd-91b5-b2dc5fd1e77f\" (UID: \"79595c60-ef68-44cd-91b5-b2dc5fd1e77f\") " Mar 08 00:36:07 crc kubenswrapper[4762]: I0308 00:36:07.991857 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-utilities" (OuterVolumeSpecName: "utilities") pod "79595c60-ef68-44cd-91b5-b2dc5fd1e77f" (UID: "79595c60-ef68-44cd-91b5-b2dc5fd1e77f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:36:07 crc kubenswrapper[4762]: I0308 00:36:07.996433 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-kube-api-access-lmlbx" (OuterVolumeSpecName: "kube-api-access-lmlbx") pod "79595c60-ef68-44cd-91b5-b2dc5fd1e77f" (UID: "79595c60-ef68-44cd-91b5-b2dc5fd1e77f"). InnerVolumeSpecName "kube-api-access-lmlbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.092728 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.092785 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmlbx\" (UniqueName: \"kubernetes.io/projected/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-kube-api-access-lmlbx\") on node \"crc\" DevicePath \"\"" Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.141487 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79595c60-ef68-44cd-91b5-b2dc5fd1e77f" (UID: "79595c60-ef68-44cd-91b5-b2dc5fd1e77f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.194354 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79595c60-ef68-44cd-91b5-b2dc5fd1e77f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.260969 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548830-h9bv7"] Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.265036 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548830-h9bv7"] Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.266592 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" event={"ID":"b242b134-d2b7-4e03-a6c1-cd046de89c3d","Type":"ContainerStarted","Data":"89c275b5f645271105dce3d8b47c4d48501d2fa278dd8970300dadd1b010dedc"} Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.266837 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.268209 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-c769fd969-k82gj" event={"ID":"c0dea8af-c19d-492c-a9ab-e271b9edad28","Type":"ContainerStarted","Data":"9af56ebd9aff970d82cca12c6a97ca9b5906d8cd39223a3cac97e4cfe803782d"} Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.268811 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.270523 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6kdk8" 
event={"ID":"79595c60-ef68-44cd-91b5-b2dc5fd1e77f","Type":"ContainerDied","Data":"29710f8c5d9688bb953100f72af2ea49894f25a5d8abf2b43273953dc67bf13e"} Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.270584 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6kdk8" Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.270595 4762 scope.go:117] "RemoveContainer" containerID="79dc01b9cb3bc12c42390c2a528ff68e4e5b456850e6bf508497a0fdce60b2fd" Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.313878 4762 scope.go:117] "RemoveContainer" containerID="f6d8975de94227eb73527077e5d76bad5a47767a469ccdfd1ef245209bc1335d" Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.316140 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" podStartSLOduration=1.707720477 podStartE2EDuration="15.316121692s" podCreationTimestamp="2026-03-08 00:35:53 +0000 UTC" firstStartedPulling="2026-03-08 00:35:54.229551805 +0000 UTC m=+775.703696159" lastFinishedPulling="2026-03-08 00:36:07.83795302 +0000 UTC m=+789.312097374" observedRunningTime="2026-03-08 00:36:08.294015851 +0000 UTC m=+789.768160215" watchObservedRunningTime="2026-03-08 00:36:08.316121692 +0000 UTC m=+789.790266026" Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.357176 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-c769fd969-k82gj" podStartSLOduration=3.155480212 podStartE2EDuration="11.357147954s" podCreationTimestamp="2026-03-08 00:35:57 +0000 UTC" firstStartedPulling="2026-03-08 00:35:59.5323427 +0000 UTC m=+781.006487034" lastFinishedPulling="2026-03-08 00:36:07.734010422 +0000 UTC m=+789.208154776" observedRunningTime="2026-03-08 00:36:08.356048413 +0000 UTC m=+789.830192757" watchObservedRunningTime="2026-03-08 00:36:08.357147954 +0000 UTC 
m=+789.831292288" Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.362870 4762 scope.go:117] "RemoveContainer" containerID="111d5afc5269e37ebef1bf94a90fdbbe89395877da7f57e025b42da3d971f9e3" Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.379306 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6kdk8"] Mar 08 00:36:08 crc kubenswrapper[4762]: I0308 00:36:08.389040 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6kdk8"] Mar 08 00:36:09 crc kubenswrapper[4762]: I0308 00:36:09.269740 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79595c60-ef68-44cd-91b5-b2dc5fd1e77f" path="/var/lib/kubelet/pods/79595c60-ef68-44cd-91b5-b2dc5fd1e77f/volumes" Mar 08 00:36:09 crc kubenswrapper[4762]: I0308 00:36:09.270664 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1420bea-fcf8-4463-b9a7-95a518acbe56" path="/var/lib/kubelet/pods/d1420bea-fcf8-4463-b9a7-95a518acbe56/volumes" Mar 08 00:36:13 crc kubenswrapper[4762]: I0308 00:36:13.892948 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Mar 08 00:36:13 crc kubenswrapper[4762]: E0308 00:36:13.893713 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79595c60-ef68-44cd-91b5-b2dc5fd1e77f" containerName="extract-utilities" Mar 08 00:36:13 crc kubenswrapper[4762]: I0308 00:36:13.893735 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="79595c60-ef68-44cd-91b5-b2dc5fd1e77f" containerName="extract-utilities" Mar 08 00:36:13 crc kubenswrapper[4762]: E0308 00:36:13.893784 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79595c60-ef68-44cd-91b5-b2dc5fd1e77f" containerName="extract-content" Mar 08 00:36:13 crc kubenswrapper[4762]: I0308 00:36:13.893799 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="79595c60-ef68-44cd-91b5-b2dc5fd1e77f" containerName="extract-content" Mar 08 00:36:13 crc 
kubenswrapper[4762]: E0308 00:36:13.893831 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79595c60-ef68-44cd-91b5-b2dc5fd1e77f" containerName="registry-server" Mar 08 00:36:13 crc kubenswrapper[4762]: I0308 00:36:13.893847 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="79595c60-ef68-44cd-91b5-b2dc5fd1e77f" containerName="registry-server" Mar 08 00:36:13 crc kubenswrapper[4762]: E0308 00:36:13.893864 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a16712a-b605-4c8b-9382-8c630322ca2c" containerName="oc" Mar 08 00:36:13 crc kubenswrapper[4762]: I0308 00:36:13.893877 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a16712a-b605-4c8b-9382-8c630322ca2c" containerName="oc" Mar 08 00:36:13 crc kubenswrapper[4762]: I0308 00:36:13.894063 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a16712a-b605-4c8b-9382-8c630322ca2c" containerName="oc" Mar 08 00:36:13 crc kubenswrapper[4762]: I0308 00:36:13.894087 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="79595c60-ef68-44cd-91b5-b2dc5fd1e77f" containerName="registry-server" Mar 08 00:36:13 crc kubenswrapper[4762]: I0308 00:36:13.894710 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Mar 08 00:36:13 crc kubenswrapper[4762]: I0308 00:36:13.901650 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Mar 08 00:36:13 crc kubenswrapper[4762]: I0308 00:36:13.902302 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Mar 08 00:36:13 crc kubenswrapper[4762]: I0308 00:36:13.906195 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Mar 08 00:36:14 crc kubenswrapper[4762]: I0308 00:36:14.086447 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-74d32c2d-7183-4e92-842d-8e3d1bef02ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74d32c2d-7183-4e92-842d-8e3d1bef02ad\") pod \"minio\" (UID: \"6f179310-e86b-4341-a9e7-ee3d48b8644f\") " pod="minio-dev/minio" Mar 08 00:36:14 crc kubenswrapper[4762]: I0308 00:36:14.086656 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjc2l\" (UniqueName: \"kubernetes.io/projected/6f179310-e86b-4341-a9e7-ee3d48b8644f-kube-api-access-fjc2l\") pod \"minio\" (UID: \"6f179310-e86b-4341-a9e7-ee3d48b8644f\") " pod="minio-dev/minio" Mar 08 00:36:14 crc kubenswrapper[4762]: I0308 00:36:14.187340 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-74d32c2d-7183-4e92-842d-8e3d1bef02ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74d32c2d-7183-4e92-842d-8e3d1bef02ad\") pod \"minio\" (UID: \"6f179310-e86b-4341-a9e7-ee3d48b8644f\") " pod="minio-dev/minio" Mar 08 00:36:14 crc kubenswrapper[4762]: I0308 00:36:14.187885 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjc2l\" (UniqueName: \"kubernetes.io/projected/6f179310-e86b-4341-a9e7-ee3d48b8644f-kube-api-access-fjc2l\") pod \"minio\" (UID: 
\"6f179310-e86b-4341-a9e7-ee3d48b8644f\") " pod="minio-dev/minio" Mar 08 00:36:14 crc kubenswrapper[4762]: I0308 00:36:14.192433 4762 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 08 00:36:14 crc kubenswrapper[4762]: I0308 00:36:14.192504 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-74d32c2d-7183-4e92-842d-8e3d1bef02ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74d32c2d-7183-4e92-842d-8e3d1bef02ad\") pod \"minio\" (UID: \"6f179310-e86b-4341-a9e7-ee3d48b8644f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/edebda30b7126f47fe0dc7db14c8047e7b256db4c5d0812d649b3f05a393b81b/globalmount\"" pod="minio-dev/minio" Mar 08 00:36:14 crc kubenswrapper[4762]: I0308 00:36:14.217610 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjc2l\" (UniqueName: \"kubernetes.io/projected/6f179310-e86b-4341-a9e7-ee3d48b8644f-kube-api-access-fjc2l\") pod \"minio\" (UID: \"6f179310-e86b-4341-a9e7-ee3d48b8644f\") " pod="minio-dev/minio" Mar 08 00:36:14 crc kubenswrapper[4762]: I0308 00:36:14.229861 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-74d32c2d-7183-4e92-842d-8e3d1bef02ad\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-74d32c2d-7183-4e92-842d-8e3d1bef02ad\") pod \"minio\" (UID: \"6f179310-e86b-4341-a9e7-ee3d48b8644f\") " pod="minio-dev/minio" Mar 08 00:36:14 crc kubenswrapper[4762]: I0308 00:36:14.270964 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Mar 08 00:36:14 crc kubenswrapper[4762]: I0308 00:36:14.530416 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Mar 08 00:36:15 crc kubenswrapper[4762]: I0308 00:36:15.312746 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"6f179310-e86b-4341-a9e7-ee3d48b8644f","Type":"ContainerStarted","Data":"fb1d13ff4408d4b8e3ed49c4251f3166390c4d0ec9f48be609ebdab5daac8c84"} Mar 08 00:36:18 crc kubenswrapper[4762]: I0308 00:36:18.338744 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"6f179310-e86b-4341-a9e7-ee3d48b8644f","Type":"ContainerStarted","Data":"bf895fa53df040bfb9081611dd156749a9b4e1f628c6b6966ed363bb7de6dafa"} Mar 08 00:36:18 crc kubenswrapper[4762]: I0308 00:36:18.358278 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=5.132311148 podStartE2EDuration="8.358248661s" podCreationTimestamp="2026-03-08 00:36:10 +0000 UTC" firstStartedPulling="2026-03-08 00:36:14.548872928 +0000 UTC m=+796.023017272" lastFinishedPulling="2026-03-08 00:36:17.774810431 +0000 UTC m=+799.248954785" observedRunningTime="2026-03-08 00:36:18.354725621 +0000 UTC m=+799.828869955" watchObservedRunningTime="2026-03-08 00:36:18.358248661 +0000 UTC m=+799.832393055" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.681359 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr"] Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.683598 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.685898 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-zrbpx" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.685925 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.686300 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.686738 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.696124 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.732914 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr"] Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.847440 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d1d5c16-4b49-4abf-8b13-0df0fda43b6a-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-8fxrr\" (UID: \"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.847498 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhkfl\" (UniqueName: \"kubernetes.io/projected/7d1d5c16-4b49-4abf-8b13-0df0fda43b6a-kube-api-access-qhkfl\") pod \"logging-loki-distributor-5d5548c9f5-8fxrr\" (UID: 
\"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.847543 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/7d1d5c16-4b49-4abf-8b13-0df0fda43b6a-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-8fxrr\" (UID: \"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.847619 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/7d1d5c16-4b49-4abf-8b13-0df0fda43b6a-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-8fxrr\" (UID: \"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.847699 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1d5c16-4b49-4abf-8b13-0df0fda43b6a-config\") pod \"logging-loki-distributor-5d5548c9f5-8fxrr\" (UID: \"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.889733 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb"] Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.890797 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.896115 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.898117 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.898163 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.915746 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4"] Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.916590 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.919171 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.919395 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.928440 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4"] Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.951409 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d1d5c16-4b49-4abf-8b13-0df0fda43b6a-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-8fxrr\" (UID: \"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a\") " 
pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.951456 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhkfl\" (UniqueName: \"kubernetes.io/projected/7d1d5c16-4b49-4abf-8b13-0df0fda43b6a-kube-api-access-qhkfl\") pod \"logging-loki-distributor-5d5548c9f5-8fxrr\" (UID: \"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.951492 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/7d1d5c16-4b49-4abf-8b13-0df0fda43b6a-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-8fxrr\" (UID: \"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.951550 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/7d1d5c16-4b49-4abf-8b13-0df0fda43b6a-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-8fxrr\" (UID: \"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.951600 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1d5c16-4b49-4abf-8b13-0df0fda43b6a-config\") pod \"logging-loki-distributor-5d5548c9f5-8fxrr\" (UID: \"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.952498 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7d1d5c16-4b49-4abf-8b13-0df0fda43b6a-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5d5548c9f5-8fxrr\" (UID: \"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.952823 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d1d5c16-4b49-4abf-8b13-0df0fda43b6a-config\") pod \"logging-loki-distributor-5d5548c9f5-8fxrr\" (UID: \"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.959271 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/7d1d5c16-4b49-4abf-8b13-0df0fda43b6a-logging-loki-distributor-http\") pod \"logging-loki-distributor-5d5548c9f5-8fxrr\" (UID: \"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.960165 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb"] Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.974570 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/7d1d5c16-4b49-4abf-8b13-0df0fda43b6a-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5d5548c9f5-8fxrr\" (UID: \"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:23 crc kubenswrapper[4762]: I0308 00:36:23.975600 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhkfl\" (UniqueName: \"kubernetes.io/projected/7d1d5c16-4b49-4abf-8b13-0df0fda43b6a-kube-api-access-qhkfl\") pod 
\"logging-loki-distributor-5d5548c9f5-8fxrr\" (UID: \"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a\") " pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.013957 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.045676 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-58595d78f8-vq8xm"] Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.046658 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.050445 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.050634 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.050741 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.050894 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.050989 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.052635 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fd90908-2008-4941-ba65-62557823e8a0-logging-loki-ca-bundle\") pod \"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: 
\"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.052695 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd90908-2008-4941-ba65-62557823e8a0-config\") pod \"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.052737 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6e603ecb-b9b1-4fba-af81-9da07c682395-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-phxp4\" (UID: \"6e603ecb-b9b1-4fba-af81-9da07c682395\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.052784 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6fd90908-2008-4941-ba65-62557823e8a0-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.052805 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e603ecb-b9b1-4fba-af81-9da07c682395-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-phxp4\" (UID: \"6e603ecb-b9b1-4fba-af81-9da07c682395\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.052826 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txlwd\" (UniqueName: \"kubernetes.io/projected/6e603ecb-b9b1-4fba-af81-9da07c682395-kube-api-access-txlwd\") pod \"logging-loki-query-frontend-6d6859c548-phxp4\" (UID: \"6e603ecb-b9b1-4fba-af81-9da07c682395\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.052843 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/6fd90908-2008-4941-ba65-62557823e8a0-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.052863 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dnjf\" (UniqueName: \"kubernetes.io/projected/6fd90908-2008-4941-ba65-62557823e8a0-kube-api-access-7dnjf\") pod \"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.052881 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/6fd90908-2008-4941-ba65-62557823e8a0-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.052897 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e603ecb-b9b1-4fba-af81-9da07c682395-config\") pod 
\"logging-loki-query-frontend-6d6859c548-phxp4\" (UID: \"6e603ecb-b9b1-4fba-af81-9da07c682395\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.052923 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6e603ecb-b9b1-4fba-af81-9da07c682395-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-phxp4\" (UID: \"6e603ecb-b9b1-4fba-af81-9da07c682395\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.057539 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-58595d78f8-vq8xm"] Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.066041 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-58595d78f8-lmbn4"] Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.067068 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.070204 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-5s94b" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.078135 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-58595d78f8-lmbn4"] Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.154466 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e603ecb-b9b1-4fba-af81-9da07c682395-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-phxp4\" (UID: \"6e603ecb-b9b1-4fba-af81-9da07c682395\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.154682 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3be01762-1f06-4534-8426-ab3b41e8e8d8-lokistack-gateway\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.154848 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txlwd\" (UniqueName: \"kubernetes.io/projected/6e603ecb-b9b1-4fba-af81-9da07c682395-kube-api-access-txlwd\") pod \"logging-loki-query-frontend-6d6859c548-phxp4\" (UID: \"6e603ecb-b9b1-4fba-af81-9da07c682395\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.154934 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: 
\"kubernetes.io/secret/6fd90908-2008-4941-ba65-62557823e8a0-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.155002 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3be01762-1f06-4534-8426-ab3b41e8e8d8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.155076 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dnjf\" (UniqueName: \"kubernetes.io/projected/6fd90908-2008-4941-ba65-62557823e8a0-kube-api-access-7dnjf\") pod \"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.155143 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/6fd90908-2008-4941-ba65-62557823e8a0-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.155215 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e603ecb-b9b1-4fba-af81-9da07c682395-config\") pod \"logging-loki-query-frontend-6d6859c548-phxp4\" (UID: \"6e603ecb-b9b1-4fba-af81-9da07c682395\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:24 crc 
kubenswrapper[4762]: I0308 00:36:24.155285 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3be01762-1f06-4534-8426-ab3b41e8e8d8-tenants\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.155349 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6e603ecb-b9b1-4fba-af81-9da07c682395-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-phxp4\" (UID: \"6e603ecb-b9b1-4fba-af81-9da07c682395\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.155411 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1efe4203-538b-41b7-9e52-832aeceaac3b-lokistack-gateway\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.156462 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1efe4203-538b-41b7-9e52-832aeceaac3b-tls-secret\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.156602 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fd90908-2008-4941-ba65-62557823e8a0-logging-loki-ca-bundle\") pod 
\"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.156705 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd90908-2008-4941-ba65-62557823e8a0-config\") pod \"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.156809 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/3be01762-1f06-4534-8426-ab3b41e8e8d8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.155600 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e603ecb-b9b1-4fba-af81-9da07c682395-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-6d6859c548-phxp4\" (UID: \"6e603ecb-b9b1-4fba-af81-9da07c682395\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.156387 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e603ecb-b9b1-4fba-af81-9da07c682395-config\") pod \"logging-loki-query-frontend-6d6859c548-phxp4\" (UID: \"6e603ecb-b9b1-4fba-af81-9da07c682395\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.156947 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1efe4203-538b-41b7-9e52-832aeceaac3b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.157275 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3be01762-1f06-4534-8426-ab3b41e8e8d8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.157404 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shld7\" (UniqueName: \"kubernetes.io/projected/3be01762-1f06-4534-8426-ab3b41e8e8d8-kube-api-access-shld7\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.157502 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1efe4203-538b-41b7-9e52-832aeceaac3b-rbac\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.157458 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fd90908-2008-4941-ba65-62557823e8a0-logging-loki-ca-bundle\") pod 
\"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.157905 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fd90908-2008-4941-ba65-62557823e8a0-config\") pod \"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.157917 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3be01762-1f06-4534-8426-ab3b41e8e8d8-tls-secret\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.157989 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1efe4203-538b-41b7-9e52-832aeceaac3b-tenants\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.158032 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/6e603ecb-b9b1-4fba-af81-9da07c682395-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-phxp4\" (UID: \"6e603ecb-b9b1-4fba-af81-9da07c682395\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.158058 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1efe4203-538b-41b7-9e52-832aeceaac3b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.158084 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1efe4203-538b-41b7-9e52-832aeceaac3b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.158122 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3be01762-1f06-4534-8426-ab3b41e8e8d8-rbac\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.158249 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dms6x\" (UniqueName: \"kubernetes.io/projected/1efe4203-538b-41b7-9e52-832aeceaac3b-kube-api-access-dms6x\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.158271 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6fd90908-2008-4941-ba65-62557823e8a0-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " 
pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.159369 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/6fd90908-2008-4941-ba65-62557823e8a0-logging-loki-querier-grpc\") pod \"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.159445 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/6e603ecb-b9b1-4fba-af81-9da07c682395-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-6d6859c548-phxp4\" (UID: \"6e603ecb-b9b1-4fba-af81-9da07c682395\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.160015 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/6fd90908-2008-4941-ba65-62557823e8a0-logging-loki-querier-http\") pod \"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.161173 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/6fd90908-2008-4941-ba65-62557823e8a0-logging-loki-s3\") pod \"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.161883 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: 
\"kubernetes.io/secret/6e603ecb-b9b1-4fba-af81-9da07c682395-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-6d6859c548-phxp4\" (UID: \"6e603ecb-b9b1-4fba-af81-9da07c682395\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.170077 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txlwd\" (UniqueName: \"kubernetes.io/projected/6e603ecb-b9b1-4fba-af81-9da07c682395-kube-api-access-txlwd\") pod \"logging-loki-query-frontend-6d6859c548-phxp4\" (UID: \"6e603ecb-b9b1-4fba-af81-9da07c682395\") " pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.179603 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dnjf\" (UniqueName: \"kubernetes.io/projected/6fd90908-2008-4941-ba65-62557823e8a0-kube-api-access-7dnjf\") pod \"logging-loki-querier-76bf7b6d45-nsgkb\" (UID: \"6fd90908-2008-4941-ba65-62557823e8a0\") " pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.211743 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.239375 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.262900 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1efe4203-538b-41b7-9e52-832aeceaac3b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.262939 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1efe4203-538b-41b7-9e52-832aeceaac3b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.262970 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3be01762-1f06-4534-8426-ab3b41e8e8d8-rbac\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.263000 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dms6x\" (UniqueName: \"kubernetes.io/projected/1efe4203-538b-41b7-9e52-832aeceaac3b-kube-api-access-dms6x\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.263030 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: 
\"kubernetes.io/configmap/3be01762-1f06-4534-8426-ab3b41e8e8d8-lokistack-gateway\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.263105 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3be01762-1f06-4534-8426-ab3b41e8e8d8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.263162 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3be01762-1f06-4534-8426-ab3b41e8e8d8-tenants\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.263186 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1efe4203-538b-41b7-9e52-832aeceaac3b-lokistack-gateway\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.263201 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1efe4203-538b-41b7-9e52-832aeceaac3b-tls-secret\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.263235 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/3be01762-1f06-4534-8426-ab3b41e8e8d8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.263256 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1efe4203-538b-41b7-9e52-832aeceaac3b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.263277 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3be01762-1f06-4534-8426-ab3b41e8e8d8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.263294 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1efe4203-538b-41b7-9e52-832aeceaac3b-rbac\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.263313 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shld7\" (UniqueName: \"kubernetes.io/projected/3be01762-1f06-4534-8426-ab3b41e8e8d8-kube-api-access-shld7\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: 
\"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.263345 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3be01762-1f06-4534-8426-ab3b41e8e8d8-tls-secret\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.263363 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1efe4203-538b-41b7-9e52-832aeceaac3b-tenants\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.264823 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1efe4203-538b-41b7-9e52-832aeceaac3b-lokistack-gateway\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.265302 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1efe4203-538b-41b7-9e52-832aeceaac3b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.265996 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3be01762-1f06-4534-8426-ab3b41e8e8d8-logging-loki-ca-bundle\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.266046 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1efe4203-538b-41b7-9e52-832aeceaac3b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.267451 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/3be01762-1f06-4534-8426-ab3b41e8e8d8-rbac\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.268168 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/3be01762-1f06-4534-8426-ab3b41e8e8d8-lokistack-gateway\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.268338 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/3be01762-1f06-4534-8426-ab3b41e8e8d8-tenants\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.268849 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"tenants\" (UniqueName: \"kubernetes.io/secret/1efe4203-538b-41b7-9e52-832aeceaac3b-tenants\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.268991 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3be01762-1f06-4534-8426-ab3b41e8e8d8-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.269214 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1efe4203-538b-41b7-9e52-832aeceaac3b-rbac\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.272296 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1efe4203-538b-41b7-9e52-832aeceaac3b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.272298 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/3be01762-1f06-4534-8426-ab3b41e8e8d8-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc 
kubenswrapper[4762]: I0308 00:36:24.272672 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/3be01762-1f06-4534-8426-ab3b41e8e8d8-tls-secret\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.272678 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1efe4203-538b-41b7-9e52-832aeceaac3b-tls-secret\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.278303 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dms6x\" (UniqueName: \"kubernetes.io/projected/1efe4203-538b-41b7-9e52-832aeceaac3b-kube-api-access-dms6x\") pod \"logging-loki-gateway-58595d78f8-vq8xm\" (UID: \"1efe4203-538b-41b7-9e52-832aeceaac3b\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.284622 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shld7\" (UniqueName: \"kubernetes.io/projected/3be01762-1f06-4534-8426-ab3b41e8e8d8-kube-api-access-shld7\") pod \"logging-loki-gateway-58595d78f8-lmbn4\" (UID: \"3be01762-1f06-4534-8426-ab3b41e8e8d8\") " pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.417975 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.435806 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.519295 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr"] Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.560989 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb"] Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.753987 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-58595d78f8-vq8xm"] Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.792729 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4"] Mar 08 00:36:24 crc kubenswrapper[4762]: W0308 00:36:24.798946 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e603ecb_b9b1_4fba_af81_9da07c682395.slice/crio-3c15bf72609014297d855e1ee6efe8e7da24bf65e366c00477b1267e972da962 WatchSource:0}: Error finding container 3c15bf72609014297d855e1ee6efe8e7da24bf65e366c00477b1267e972da962: Status 404 returned error can't find the container with id 3c15bf72609014297d855e1ee6efe8e7da24bf65e366c00477b1267e972da962 Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.878383 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.879121 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.881676 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.893027 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.898753 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.928915 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.931436 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.933451 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.933981 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.947853 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.969175 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.969940 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.974910 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.975159 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Mar 08 00:36:24 crc kubenswrapper[4762]: I0308 00:36:24.994449 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.044500 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-58595d78f8-lmbn4"] Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.078383 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f5d10b38-9144-4c01-8031-65d46511a3d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d10b38-9144-4c01-8031-65d46511a3d6\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.079059 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddlkr\" (UniqueName: \"kubernetes.io/projected/306f3a2d-d090-4aad-b84c-05078f5f8be5-kube-api-access-ddlkr\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.079189 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/748fb55a-dbe2-4b8b-9e08-577495a258a4-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: 
\"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.079304 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/306f3a2d-d090-4aad-b84c-05078f5f8be5-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.079416 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/748fb55a-dbe2-4b8b-9e08-577495a258a4-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.082157 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/748fb55a-dbe2-4b8b-9e08-577495a258a4-config\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.082256 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d085bf17-b5d3-4ea6-8d5e-ab52b6050983\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d085bf17-b5d3-4ea6-8d5e-ab52b6050983\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.082353 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/306f3a2d-d090-4aad-b84c-05078f5f8be5-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.082438 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c30f467-b939-4c68-91f0-707c6893e6ff-config\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.082624 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq8j2\" (UniqueName: \"kubernetes.io/projected/4c30f467-b939-4c68-91f0-707c6893e6ff-kube-api-access-mq8j2\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.082733 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fe0033d2-2dc8-4f66-ad82-b009031f585a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe0033d2-2dc8-4f66-ad82-b009031f585a\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.082850 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgw5s\" (UniqueName: \"kubernetes.io/projected/748fb55a-dbe2-4b8b-9e08-577495a258a4-kube-api-access-sgw5s\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: 
I0308 00:36:25.082939 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-51bb0221-acb5-43f3-beab-ba1daf8b47d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51bb0221-acb5-43f3-beab-ba1daf8b47d8\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.083141 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/306f3a2d-d090-4aad-b84c-05078f5f8be5-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.083245 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4c30f467-b939-4c68-91f0-707c6893e6ff-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.083348 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306f3a2d-d090-4aad-b84c-05078f5f8be5-config\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.083439 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/4c30f467-b939-4c68-91f0-707c6893e6ff-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: 
\"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.083556 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c30f467-b939-4c68-91f0-707c6893e6ff-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.083636 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/306f3a2d-d090-4aad-b84c-05078f5f8be5-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.083718 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/748fb55a-dbe2-4b8b-9e08-577495a258a4-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.083814 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/4c30f467-b939-4c68-91f0-707c6893e6ff-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.083921 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" 
(UniqueName: \"kubernetes.io/secret/748fb55a-dbe2-4b8b-9e08-577495a258a4-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.184712 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddlkr\" (UniqueName: \"kubernetes.io/projected/306f3a2d-d090-4aad-b84c-05078f5f8be5-kube-api-access-ddlkr\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.184774 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/748fb55a-dbe2-4b8b-9e08-577495a258a4-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.184798 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/306f3a2d-d090-4aad-b84c-05078f5f8be5-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.184819 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/748fb55a-dbe2-4b8b-9e08-577495a258a4-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.184839 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/748fb55a-dbe2-4b8b-9e08-577495a258a4-config\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.184859 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d085bf17-b5d3-4ea6-8d5e-ab52b6050983\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d085bf17-b5d3-4ea6-8d5e-ab52b6050983\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.184882 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/306f3a2d-d090-4aad-b84c-05078f5f8be5-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.184898 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c30f467-b939-4c68-91f0-707c6893e6ff-config\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.184939 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq8j2\" (UniqueName: \"kubernetes.io/projected/4c30f467-b939-4c68-91f0-707c6893e6ff-kube-api-access-mq8j2\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.184960 4762 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-fe0033d2-2dc8-4f66-ad82-b009031f585a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe0033d2-2dc8-4f66-ad82-b009031f585a\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.184981 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgw5s\" (UniqueName: \"kubernetes.io/projected/748fb55a-dbe2-4b8b-9e08-577495a258a4-kube-api-access-sgw5s\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.185001 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-51bb0221-acb5-43f3-beab-ba1daf8b47d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51bb0221-acb5-43f3-beab-ba1daf8b47d8\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.185023 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/306f3a2d-d090-4aad-b84c-05078f5f8be5-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.185038 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4c30f467-b939-4c68-91f0-707c6893e6ff-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.185055 
4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306f3a2d-d090-4aad-b84c-05078f5f8be5-config\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.185072 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/4c30f467-b939-4c68-91f0-707c6893e6ff-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.185100 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c30f467-b939-4c68-91f0-707c6893e6ff-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.185121 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/306f3a2d-d090-4aad-b84c-05078f5f8be5-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.185141 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/748fb55a-dbe2-4b8b-9e08-577495a258a4-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 
00:36:25.185160 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/4c30f467-b939-4c68-91f0-707c6893e6ff-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.185179 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/748fb55a-dbe2-4b8b-9e08-577495a258a4-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.185200 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f5d10b38-9144-4c01-8031-65d46511a3d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d10b38-9144-4c01-8031-65d46511a3d6\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.185967 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c30f467-b939-4c68-91f0-707c6893e6ff-config\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.186749 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/748fb55a-dbe2-4b8b-9e08-577495a258a4-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 
00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.186772 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/748fb55a-dbe2-4b8b-9e08-577495a258a4-config\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.191939 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c30f467-b939-4c68-91f0-707c6893e6ff-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.192445 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/306f3a2d-d090-4aad-b84c-05078f5f8be5-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.192477 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/748fb55a-dbe2-4b8b-9e08-577495a258a4-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.194018 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/4c30f467-b939-4c68-91f0-707c6893e6ff-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 
crc kubenswrapper[4762]: I0308 00:36:25.194118 4762 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.194127 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/306f3a2d-d090-4aad-b84c-05078f5f8be5-config\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.194143 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d085bf17-b5d3-4ea6-8d5e-ab52b6050983\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d085bf17-b5d3-4ea6-8d5e-ab52b6050983\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d11fd4968a3776a1dbe38508127b43b1f00828711d8b56715644b5e942195dfb/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.195484 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/306f3a2d-d090-4aad-b84c-05078f5f8be5-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.195524 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/306f3a2d-d090-4aad-b84c-05078f5f8be5-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.196816 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/748fb55a-dbe2-4b8b-9e08-577495a258a4-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.198086 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/748fb55a-dbe2-4b8b-9e08-577495a258a4-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.199402 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/306f3a2d-d090-4aad-b84c-05078f5f8be5-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.199842 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/4c30f467-b939-4c68-91f0-707c6893e6ff-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.202090 4762 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.202115 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fe0033d2-2dc8-4f66-ad82-b009031f585a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe0033d2-2dc8-4f66-ad82-b009031f585a\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9b602d0fc6b290815c0b7f7ce5a9df975e4107c6fc3f9a91ed3c868ad9bf0f4a/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.202150 4762 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.202179 4762 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.202210 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-51bb0221-acb5-43f3-beab-ba1daf8b47d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51bb0221-acb5-43f3-beab-ba1daf8b47d8\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e7f4bf566683165e8fbcca2c13e033a892f689cc8a916cdadf0101d6f5ddd1e2/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.202235 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgw5s\" (UniqueName: \"kubernetes.io/projected/748fb55a-dbe2-4b8b-9e08-577495a258a4-kube-api-access-sgw5s\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 
08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.202179 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f5d10b38-9144-4c01-8031-65d46511a3d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d10b38-9144-4c01-8031-65d46511a3d6\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1e6f53138db0ec39394aa53bf5b458a7d1cd5b04b01fd22fba4ad9f26c91b114/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.202865 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/4c30f467-b939-4c68-91f0-707c6893e6ff-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.215561 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq8j2\" (UniqueName: \"kubernetes.io/projected/4c30f467-b939-4c68-91f0-707c6893e6ff-kube-api-access-mq8j2\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.216212 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddlkr\" (UniqueName: \"kubernetes.io/projected/306f3a2d-d090-4aad-b84c-05078f5f8be5-kube-api-access-ddlkr\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.239850 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fe0033d2-2dc8-4f66-ad82-b009031f585a\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fe0033d2-2dc8-4f66-ad82-b009031f585a\") pod \"logging-loki-compactor-0\" (UID: \"4c30f467-b939-4c68-91f0-707c6893e6ff\") " pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.240342 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-51bb0221-acb5-43f3-beab-ba1daf8b47d8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-51bb0221-acb5-43f3-beab-ba1daf8b47d8\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.242815 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d085bf17-b5d3-4ea6-8d5e-ab52b6050983\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d085bf17-b5d3-4ea6-8d5e-ab52b6050983\") pod \"logging-loki-index-gateway-0\" (UID: \"748fb55a-dbe2-4b8b-9e08-577495a258a4\") " pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.245857 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f5d10b38-9144-4c01-8031-65d46511a3d6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d10b38-9144-4c01-8031-65d46511a3d6\") pod \"logging-loki-ingester-0\" (UID: \"306f3a2d-d090-4aad-b84c-05078f5f8be5\") " pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.341886 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.380248 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.395401 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" event={"ID":"1efe4203-538b-41b7-9e52-832aeceaac3b","Type":"ContainerStarted","Data":"470a3122a49def72722327278684af38e9ab4d75b04af3e8620b791e3ae8cd4a"} Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.397168 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" event={"ID":"6e603ecb-b9b1-4fba-af81-9da07c682395","Type":"ContainerStarted","Data":"3c15bf72609014297d855e1ee6efe8e7da24bf65e366c00477b1267e972da962"} Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.398692 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" event={"ID":"6fd90908-2008-4941-ba65-62557823e8a0","Type":"ContainerStarted","Data":"bfe017ca4860da33ef30ca2347f83ef10a8f74d9aba5bd46445d8c6e1504dbed"} Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.399976 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" event={"ID":"3be01762-1f06-4534-8426-ab3b41e8e8d8","Type":"ContainerStarted","Data":"b69e44f0aaa22c3473d80c1699e66991f69e6d72d9c95a69b5a71ff4800c6606"} Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.401190 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" event={"ID":"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a","Type":"ContainerStarted","Data":"7ea137dcf7539c63a16c2ad08c7babf75258bfa061c7f31517ff969cc8e9656c"} Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.494293 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.620530 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Mar 08 00:36:25 crc kubenswrapper[4762]: W0308 00:36:25.631747 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c30f467_b939_4c68_91f0_707c6893e6ff.slice/crio-113080a5d1d4324bfca19097f853cd635bfda22b699887951820f063c3d906c8 WatchSource:0}: Error finding container 113080a5d1d4324bfca19097f853cd635bfda22b699887951820f063c3d906c8: Status 404 returned error can't find the container with id 113080a5d1d4324bfca19097f853cd635bfda22b699887951820f063c3d906c8 Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.693127 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Mar 08 00:36:25 crc kubenswrapper[4762]: W0308 00:36:25.701295 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod748fb55a_dbe2_4b8b_9e08_577495a258a4.slice/crio-e2a9098a3f06562acf34489f78df1a96c1f95bf740bebfe96fe297b193711b64 WatchSource:0}: Error finding container e2a9098a3f06562acf34489f78df1a96c1f95bf740bebfe96fe297b193711b64: Status 404 returned error can't find the container with id e2a9098a3f06562acf34489f78df1a96c1f95bf740bebfe96fe297b193711b64 Mar 08 00:36:25 crc kubenswrapper[4762]: I0308 00:36:25.794352 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Mar 08 00:36:26 crc kubenswrapper[4762]: I0308 00:36:26.410546 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"4c30f467-b939-4c68-91f0-707c6893e6ff","Type":"ContainerStarted","Data":"113080a5d1d4324bfca19097f853cd635bfda22b699887951820f063c3d906c8"} Mar 08 00:36:26 
crc kubenswrapper[4762]: I0308 00:36:26.411969 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"748fb55a-dbe2-4b8b-9e08-577495a258a4","Type":"ContainerStarted","Data":"e2a9098a3f06562acf34489f78df1a96c1f95bf740bebfe96fe297b193711b64"} Mar 08 00:36:26 crc kubenswrapper[4762]: I0308 00:36:26.413268 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"306f3a2d-d090-4aad-b84c-05078f5f8be5","Type":"ContainerStarted","Data":"ca8d056e3ac8cec578b7a8bb1669b6911f339bd0ae0bf13eb3db4dd25e61cd0b"} Mar 08 00:36:28 crc kubenswrapper[4762]: I0308 00:36:28.431339 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" event={"ID":"1efe4203-538b-41b7-9e52-832aeceaac3b","Type":"ContainerStarted","Data":"8fa8b8d0f4f83008df53ff8bdc1f3f2275b1e9ce1fa97291596e622e8b3df9c2"} Mar 08 00:36:28 crc kubenswrapper[4762]: I0308 00:36:28.435032 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" event={"ID":"6e603ecb-b9b1-4fba-af81-9da07c682395","Type":"ContainerStarted","Data":"6d1434ae22761a848bc86c02d13a5224ac088abff019e7c895dfbcbd6660ebd3"} Mar 08 00:36:28 crc kubenswrapper[4762]: I0308 00:36:28.435564 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:28 crc kubenswrapper[4762]: I0308 00:36:28.436744 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"4c30f467-b939-4c68-91f0-707c6893e6ff","Type":"ContainerStarted","Data":"76303b08bc04bed6d848ae802b9f47bd4ff4d572cc75c7b0e7e6af284545a67c"} Mar 08 00:36:28 crc kubenswrapper[4762]: I0308 00:36:28.436887 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" 
Mar 08 00:36:28 crc kubenswrapper[4762]: I0308 00:36:28.439028 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" event={"ID":"6fd90908-2008-4941-ba65-62557823e8a0","Type":"ContainerStarted","Data":"cd4ad25f91f23171b14a9f4af9540bda0df22ef123ef902894ee44b9f576b6f0"} Mar 08 00:36:28 crc kubenswrapper[4762]: I0308 00:36:28.439279 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:28 crc kubenswrapper[4762]: I0308 00:36:28.440878 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"306f3a2d-d090-4aad-b84c-05078f5f8be5","Type":"ContainerStarted","Data":"54049499d62116a0d36a83e2022c53f01e4fceb1d6240b54a448de458f2dec66"} Mar 08 00:36:28 crc kubenswrapper[4762]: I0308 00:36:28.441030 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:36:28 crc kubenswrapper[4762]: I0308 00:36:28.442512 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" event={"ID":"3be01762-1f06-4534-8426-ab3b41e8e8d8","Type":"ContainerStarted","Data":"0bb761b7ba822d5d63bbfc8b412ec3db9d8abaeb0d134e97c1ecec957d4c1935"} Mar 08 00:36:28 crc kubenswrapper[4762]: I0308 00:36:28.492342 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" podStartSLOduration=2.474936573 podStartE2EDuration="5.492315737s" podCreationTimestamp="2026-03-08 00:36:23 +0000 UTC" firstStartedPulling="2026-03-08 00:36:24.801116943 +0000 UTC m=+806.275261287" lastFinishedPulling="2026-03-08 00:36:27.818496107 +0000 UTC m=+809.292640451" observedRunningTime="2026-03-08 00:36:28.468621538 +0000 UTC m=+809.942765902" watchObservedRunningTime="2026-03-08 00:36:28.492315737 +0000 UTC 
m=+809.966460081" Mar 08 00:36:28 crc kubenswrapper[4762]: I0308 00:36:28.515004 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.5006408479999997 podStartE2EDuration="5.514990464s" podCreationTimestamp="2026-03-08 00:36:23 +0000 UTC" firstStartedPulling="2026-03-08 00:36:25.806477244 +0000 UTC m=+807.280621598" lastFinishedPulling="2026-03-08 00:36:27.82082687 +0000 UTC m=+809.294971214" observedRunningTime="2026-03-08 00:36:28.496444898 +0000 UTC m=+809.970589252" watchObservedRunningTime="2026-03-08 00:36:28.514990464 +0000 UTC m=+809.989134808" Mar 08 00:36:28 crc kubenswrapper[4762]: I0308 00:36:28.515556 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" podStartSLOduration=2.372411278 podStartE2EDuration="5.515552782s" podCreationTimestamp="2026-03-08 00:36:23 +0000 UTC" firstStartedPulling="2026-03-08 00:36:24.676920763 +0000 UTC m=+806.151065107" lastFinishedPulling="2026-03-08 00:36:27.820062237 +0000 UTC m=+809.294206611" observedRunningTime="2026-03-08 00:36:28.511497405 +0000 UTC m=+809.985641749" watchObservedRunningTime="2026-03-08 00:36:28.515552782 +0000 UTC m=+809.989697126" Mar 08 00:36:29 crc kubenswrapper[4762]: I0308 00:36:29.307860 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=4.123399183 podStartE2EDuration="6.307836491s" podCreationTimestamp="2026-03-08 00:36:23 +0000 UTC" firstStartedPulling="2026-03-08 00:36:25.635019329 +0000 UTC m=+807.109163713" lastFinishedPulling="2026-03-08 00:36:27.819456677 +0000 UTC m=+809.293601021" observedRunningTime="2026-03-08 00:36:28.528447811 +0000 UTC m=+810.002592175" watchObservedRunningTime="2026-03-08 00:36:29.307836491 +0000 UTC m=+810.781980835" Mar 08 00:36:30 crc kubenswrapper[4762]: I0308 00:36:30.459795 4762 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" event={"ID":"3be01762-1f06-4534-8426-ab3b41e8e8d8","Type":"ContainerStarted","Data":"fc1fc80a0d511d9a049d089b96247539462799cbd9abc33b4eb96208f5c52ea6"} Mar 08 00:36:30 crc kubenswrapper[4762]: I0308 00:36:30.460205 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:30 crc kubenswrapper[4762]: I0308 00:36:30.462421 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" event={"ID":"1efe4203-538b-41b7-9e52-832aeceaac3b","Type":"ContainerStarted","Data":"925893af04ee6cb5bbf4e9d81569a3b4779570379532707c902cd354e82c4238"} Mar 08 00:36:30 crc kubenswrapper[4762]: I0308 00:36:30.462695 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:30 crc kubenswrapper[4762]: I0308 00:36:30.474104 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:30 crc kubenswrapper[4762]: I0308 00:36:30.475289 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:30 crc kubenswrapper[4762]: I0308 00:36:30.502844 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" podStartSLOduration=1.47516032 podStartE2EDuration="6.502804982s" podCreationTimestamp="2026-03-08 00:36:24 +0000 UTC" firstStartedPulling="2026-03-08 00:36:25.055986797 +0000 UTC m=+806.530131141" lastFinishedPulling="2026-03-08 00:36:30.083631459 +0000 UTC m=+811.557775803" observedRunningTime="2026-03-08 00:36:30.491288757 +0000 UTC m=+811.965433141" watchObservedRunningTime="2026-03-08 00:36:30.502804982 +0000 UTC 
m=+811.976949376" Mar 08 00:36:30 crc kubenswrapper[4762]: I0308 00:36:30.523859 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" podStartSLOduration=2.207563994 podStartE2EDuration="7.523838758s" podCreationTimestamp="2026-03-08 00:36:23 +0000 UTC" firstStartedPulling="2026-03-08 00:36:24.761300463 +0000 UTC m=+806.235444807" lastFinishedPulling="2026-03-08 00:36:30.077575227 +0000 UTC m=+811.551719571" observedRunningTime="2026-03-08 00:36:30.518001433 +0000 UTC m=+811.992145807" watchObservedRunningTime="2026-03-08 00:36:30.523838758 +0000 UTC m=+811.997983112" Mar 08 00:36:31 crc kubenswrapper[4762]: I0308 00:36:31.470508 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:31 crc kubenswrapper[4762]: I0308 00:36:31.471986 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:31 crc kubenswrapper[4762]: I0308 00:36:31.486883 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" Mar 08 00:36:31 crc kubenswrapper[4762]: I0308 00:36:31.493984 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" Mar 08 00:36:32 crc kubenswrapper[4762]: I0308 00:36:32.480685 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" event={"ID":"7d1d5c16-4b49-4abf-8b13-0df0fda43b6a","Type":"ContainerStarted","Data":"d18416fa445553fffb18fa91d2f45b209e41be238c52a1ae12b3176cd549ae9c"} Mar 08 00:36:32 crc kubenswrapper[4762]: I0308 00:36:32.481407 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:32 crc 
kubenswrapper[4762]: I0308 00:36:32.510647 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" podStartSLOduration=2.027000551 podStartE2EDuration="9.510616112s" podCreationTimestamp="2026-03-08 00:36:23 +0000 UTC" firstStartedPulling="2026-03-08 00:36:24.575273547 +0000 UTC m=+806.049417881" lastFinishedPulling="2026-03-08 00:36:32.058889068 +0000 UTC m=+813.533033442" observedRunningTime="2026-03-08 00:36:32.501189264 +0000 UTC m=+813.975333628" watchObservedRunningTime="2026-03-08 00:36:32.510616112 +0000 UTC m=+813.984760486" Mar 08 00:36:34 crc kubenswrapper[4762]: I0308 00:36:34.502616 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"748fb55a-dbe2-4b8b-9e08-577495a258a4","Type":"ContainerStarted","Data":"3ed8e2da8d9dd2214e49b4d8114236876780709854b89e336b6c0b43f75e94f6"} Mar 08 00:36:34 crc kubenswrapper[4762]: I0308 00:36:34.503119 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:34 crc kubenswrapper[4762]: I0308 00:36:34.543360 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=3.679474267 podStartE2EDuration="11.543323539s" podCreationTimestamp="2026-03-08 00:36:23 +0000 UTC" firstStartedPulling="2026-03-08 00:36:25.7042726 +0000 UTC m=+807.178416944" lastFinishedPulling="2026-03-08 00:36:33.568121842 +0000 UTC m=+815.042266216" observedRunningTime="2026-03-08 00:36:34.532743405 +0000 UTC m=+816.006887789" watchObservedRunningTime="2026-03-08 00:36:34.543323539 +0000 UTC m=+816.017467923" Mar 08 00:36:44 crc kubenswrapper[4762]: I0308 00:36:44.220938 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 00:36:44 crc kubenswrapper[4762]: 
I0308 00:36:44.255317 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 00:36:45 crc kubenswrapper[4762]: I0308 00:36:45.371203 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Mar 08 00:36:45 crc kubenswrapper[4762]: I0308 00:36:45.499339 4762 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Mar 08 00:36:45 crc kubenswrapper[4762]: I0308 00:36:45.499393 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="306f3a2d-d090-4aad-b84c-05078f5f8be5" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 08 00:36:54 crc kubenswrapper[4762]: I0308 00:36:54.021820 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 00:36:55 crc kubenswrapper[4762]: I0308 00:36:55.391183 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 00:36:55 crc kubenswrapper[4762]: I0308 00:36:55.503990 4762 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens Mar 08 00:36:55 crc kubenswrapper[4762]: I0308 00:36:55.504057 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="306f3a2d-d090-4aad-b84c-05078f5f8be5" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed 
with statuscode: 503" Mar 08 00:37:00 crc kubenswrapper[4762]: I0308 00:37:00.093712 4762 scope.go:117] "RemoveContainer" containerID="3b2632051e282cd1b3c7666af9869675a8e92a74f0d236b62407b82d66afbea4" Mar 08 00:37:05 crc kubenswrapper[4762]: I0308 00:37:05.503510 4762 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Mar 08 00:37:05 crc kubenswrapper[4762]: I0308 00:37:05.504282 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="306f3a2d-d090-4aad-b84c-05078f5f8be5" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 08 00:37:15 crc kubenswrapper[4762]: I0308 00:37:15.501166 4762 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready Mar 08 00:37:15 crc kubenswrapper[4762]: I0308 00:37:15.502052 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="306f3a2d-d090-4aad-b84c-05078f5f8be5" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 08 00:37:25 crc kubenswrapper[4762]: I0308 00:37:25.502799 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Mar 08 00:37:42 crc kubenswrapper[4762]: I0308 00:37:42.852335 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Mar 08 00:37:42 crc kubenswrapper[4762]: I0308 00:37:42.853301 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.124439 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-4jtbw"] Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.128302 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.133809 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.133882 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.134375 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.134737 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-rgbsr" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.135118 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.145900 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.149405 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-4jtbw"] Mar 08 00:37:43 crc kubenswrapper[4762]: 
I0308 00:37:43.202888 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-4jtbw"] Mar 08 00:37:43 crc kubenswrapper[4762]: E0308 00:37:43.203380 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-zcjlh metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-4jtbw" podUID="fb9ca221-bdcc-496a-902a-389f3a68db06" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.261871 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-collector-syslog-receiver\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.261948 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-config-openshift-service-cacrt\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.262215 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-entrypoint\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.262348 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: 
\"kubernetes.io/projected/fb9ca221-bdcc-496a-902a-389f3a68db06-sa-token\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.262449 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-trusted-ca\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.262485 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcjlh\" (UniqueName: \"kubernetes.io/projected/fb9ca221-bdcc-496a-902a-389f3a68db06-kube-api-access-zcjlh\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.262533 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/fb9ca221-bdcc-496a-902a-389f3a68db06-datadir\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.262642 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-config\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.262694 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb9ca221-bdcc-496a-902a-389f3a68db06-tmp\") pod 
\"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.262865 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-collector-token\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.263080 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-metrics\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.364471 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-trusted-ca\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.364520 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcjlh\" (UniqueName: \"kubernetes.io/projected/fb9ca221-bdcc-496a-902a-389f3a68db06-kube-api-access-zcjlh\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.364545 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/fb9ca221-bdcc-496a-902a-389f3a68db06-datadir\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc 
kubenswrapper[4762]: I0308 00:37:43.364567 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-config\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.364594 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb9ca221-bdcc-496a-902a-389f3a68db06-tmp\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.364632 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-collector-token\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.364662 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-metrics\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.364697 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-collector-syslog-receiver\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.364720 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: 
\"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-config-openshift-service-cacrt\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.364746 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-entrypoint\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.364810 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/fb9ca221-bdcc-496a-902a-389f3a68db06-sa-token\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.365645 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-config-openshift-service-cacrt\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.365717 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-trusted-ca\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.366026 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-entrypoint\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " 
pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.366093 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/fb9ca221-bdcc-496a-902a-389f3a68db06-datadir\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.366631 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-config\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.376487 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-collector-token\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.377011 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-metrics\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.378233 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb9ca221-bdcc-496a-902a-389f3a68db06-tmp\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.387891 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: 
\"kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-collector-syslog-receiver\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.388780 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/fb9ca221-bdcc-496a-902a-389f3a68db06-sa-token\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:43 crc kubenswrapper[4762]: I0308 00:37:43.393942 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcjlh\" (UniqueName: \"kubernetes.io/projected/fb9ca221-bdcc-496a-902a-389f3a68db06-kube-api-access-zcjlh\") pod \"collector-4jtbw\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " pod="openshift-logging/collector-4jtbw" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.102595 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-4jtbw" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.118646 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-4jtbw" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.280691 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcjlh\" (UniqueName: \"kubernetes.io/projected/fb9ca221-bdcc-496a-902a-389f3a68db06-kube-api-access-zcjlh\") pod \"fb9ca221-bdcc-496a-902a-389f3a68db06\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.280837 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-trusted-ca\") pod \"fb9ca221-bdcc-496a-902a-389f3a68db06\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.280916 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-metrics\") pod \"fb9ca221-bdcc-496a-902a-389f3a68db06\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.280967 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-entrypoint\") pod \"fb9ca221-bdcc-496a-902a-389f3a68db06\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.281052 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/fb9ca221-bdcc-496a-902a-389f3a68db06-sa-token\") pod \"fb9ca221-bdcc-496a-902a-389f3a68db06\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.281199 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: 
\"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-config-openshift-service-cacrt\") pod \"fb9ca221-bdcc-496a-902a-389f3a68db06\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.281300 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/fb9ca221-bdcc-496a-902a-389f3a68db06-datadir\") pod \"fb9ca221-bdcc-496a-902a-389f3a68db06\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.281368 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-config\") pod \"fb9ca221-bdcc-496a-902a-389f3a68db06\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.281425 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-collector-syslog-receiver\") pod \"fb9ca221-bdcc-496a-902a-389f3a68db06\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.281456 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb9ca221-bdcc-496a-902a-389f3a68db06-datadir" (OuterVolumeSpecName: "datadir") pod "fb9ca221-bdcc-496a-902a-389f3a68db06" (UID: "fb9ca221-bdcc-496a-902a-389f3a68db06"). InnerVolumeSpecName "datadir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.281486 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-collector-token\") pod \"fb9ca221-bdcc-496a-902a-389f3a68db06\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.281619 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb9ca221-bdcc-496a-902a-389f3a68db06-tmp\") pod \"fb9ca221-bdcc-496a-902a-389f3a68db06\" (UID: \"fb9ca221-bdcc-496a-902a-389f3a68db06\") " Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.282465 4762 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/fb9ca221-bdcc-496a-902a-389f3a68db06-datadir\") on node \"crc\" DevicePath \"\"" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.282746 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "fb9ca221-bdcc-496a-902a-389f3a68db06" (UID: "fb9ca221-bdcc-496a-902a-389f3a68db06"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.282984 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "fb9ca221-bdcc-496a-902a-389f3a68db06" (UID: "fb9ca221-bdcc-496a-902a-389f3a68db06"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.283333 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-config" (OuterVolumeSpecName: "config") pod "fb9ca221-bdcc-496a-902a-389f3a68db06" (UID: "fb9ca221-bdcc-496a-902a-389f3a68db06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.283726 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "fb9ca221-bdcc-496a-902a-389f3a68db06" (UID: "fb9ca221-bdcc-496a-902a-389f3a68db06"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.288523 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "fb9ca221-bdcc-496a-902a-389f3a68db06" (UID: "fb9ca221-bdcc-496a-902a-389f3a68db06"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.289041 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb9ca221-bdcc-496a-902a-389f3a68db06-kube-api-access-zcjlh" (OuterVolumeSpecName: "kube-api-access-zcjlh") pod "fb9ca221-bdcc-496a-902a-389f3a68db06" (UID: "fb9ca221-bdcc-496a-902a-389f3a68db06"). InnerVolumeSpecName "kube-api-access-zcjlh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.289030 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb9ca221-bdcc-496a-902a-389f3a68db06-sa-token" (OuterVolumeSpecName: "sa-token") pod "fb9ca221-bdcc-496a-902a-389f3a68db06" (UID: "fb9ca221-bdcc-496a-902a-389f3a68db06"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.291738 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb9ca221-bdcc-496a-902a-389f3a68db06-tmp" (OuterVolumeSpecName: "tmp") pod "fb9ca221-bdcc-496a-902a-389f3a68db06" (UID: "fb9ca221-bdcc-496a-902a-389f3a68db06"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.293976 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-collector-token" (OuterVolumeSpecName: "collector-token") pod "fb9ca221-bdcc-496a-902a-389f3a68db06" (UID: "fb9ca221-bdcc-496a-902a-389f3a68db06"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.299982 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-metrics" (OuterVolumeSpecName: "metrics") pod "fb9ca221-bdcc-496a-902a-389f3a68db06" (UID: "fb9ca221-bdcc-496a-902a-389f3a68db06"). InnerVolumeSpecName "metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.384750 4762 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\"" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.384840 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.384870 4762 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-collector-syslog-receiver\") on node \"crc\" DevicePath \"\"" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.384898 4762 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-collector-token\") on node \"crc\" DevicePath \"\"" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.384920 4762 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb9ca221-bdcc-496a-902a-389f3a68db06-tmp\") on node \"crc\" DevicePath \"\"" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.384944 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcjlh\" (UniqueName: \"kubernetes.io/projected/fb9ca221-bdcc-496a-902a-389f3a68db06-kube-api-access-zcjlh\") on node \"crc\" DevicePath \"\"" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.384968 4762 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 
00:37:44.384992 4762 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/fb9ca221-bdcc-496a-902a-389f3a68db06-metrics\") on node \"crc\" DevicePath \"\"" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.385015 4762 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/fb9ca221-bdcc-496a-902a-389f3a68db06-entrypoint\") on node \"crc\" DevicePath \"\"" Mar 08 00:37:44 crc kubenswrapper[4762]: I0308 00:37:44.385038 4762 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/fb9ca221-bdcc-496a-902a-389f3a68db06-sa-token\") on node \"crc\" DevicePath \"\"" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.110901 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-4jtbw" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.190863 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-4jtbw"] Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.202220 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-4jtbw"] Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.209562 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-x5zk2"] Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.211233 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.216110 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.217878 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.218473 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.218856 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.219225 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-rgbsr" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.227639 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-x5zk2"] Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.228998 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.296191 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb9ca221-bdcc-496a-902a-389f3a68db06" path="/var/lib/kubelet/pods/fb9ca221-bdcc-496a-902a-389f3a68db06/volumes" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.401451 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8dd9e504-c718-4778-972a-da408fd6c2fe-collector-syslog-receiver\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.401529 4762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8dd9e504-c718-4778-972a-da408fd6c2fe-collector-token\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.401558 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8dd9e504-c718-4778-972a-da408fd6c2fe-datadir\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.401578 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8dd9e504-c718-4778-972a-da408fd6c2fe-config-openshift-service-cacrt\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.401601 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8dd9e504-c718-4778-972a-da408fd6c2fe-metrics\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.401621 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dd9e504-c718-4778-972a-da408fd6c2fe-config\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.401675 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxzx9\" (UniqueName: \"kubernetes.io/projected/8dd9e504-c718-4778-972a-da408fd6c2fe-kube-api-access-vxzx9\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.401862 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8dd9e504-c718-4778-972a-da408fd6c2fe-entrypoint\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.401932 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8dd9e504-c718-4778-972a-da408fd6c2fe-tmp\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.401961 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8dd9e504-c718-4778-972a-da408fd6c2fe-trusted-ca\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.402018 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8dd9e504-c718-4778-972a-da408fd6c2fe-sa-token\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.503991 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: 
\"kubernetes.io/secret/8dd9e504-c718-4778-972a-da408fd6c2fe-collector-token\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.504065 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8dd9e504-c718-4778-972a-da408fd6c2fe-config-openshift-service-cacrt\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.504099 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8dd9e504-c718-4778-972a-da408fd6c2fe-datadir\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.504134 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8dd9e504-c718-4778-972a-da408fd6c2fe-metrics\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.504164 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dd9e504-c718-4778-972a-da408fd6c2fe-config\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.504223 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxzx9\" (UniqueName: \"kubernetes.io/projected/8dd9e504-c718-4778-972a-da408fd6c2fe-kube-api-access-vxzx9\") pod \"collector-x5zk2\" (UID: 
\"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.504255 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8dd9e504-c718-4778-972a-da408fd6c2fe-entrypoint\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.504285 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/8dd9e504-c718-4778-972a-da408fd6c2fe-datadir\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.504316 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8dd9e504-c718-4778-972a-da408fd6c2fe-tmp\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.504384 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8dd9e504-c718-4778-972a-da408fd6c2fe-trusted-ca\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.504478 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8dd9e504-c718-4778-972a-da408fd6c2fe-sa-token\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.504549 4762 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8dd9e504-c718-4778-972a-da408fd6c2fe-collector-syslog-receiver\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.506966 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dd9e504-c718-4778-972a-da408fd6c2fe-config\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.508578 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/8dd9e504-c718-4778-972a-da408fd6c2fe-entrypoint\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.510109 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8dd9e504-c718-4778-972a-da408fd6c2fe-trusted-ca\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.512982 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8dd9e504-c718-4778-972a-da408fd6c2fe-tmp\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.513334 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/8dd9e504-c718-4778-972a-da408fd6c2fe-config-openshift-service-cacrt\") pod \"collector-x5zk2\" (UID: 
\"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.513329 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/8dd9e504-c718-4778-972a-da408fd6c2fe-collector-syslog-receiver\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.515981 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/8dd9e504-c718-4778-972a-da408fd6c2fe-collector-token\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.518564 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/8dd9e504-c718-4778-972a-da408fd6c2fe-metrics\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.522536 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxzx9\" (UniqueName: \"kubernetes.io/projected/8dd9e504-c718-4778-972a-da408fd6c2fe-kube-api-access-vxzx9\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.527253 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/8dd9e504-c718-4778-972a-da408fd6c2fe-sa-token\") pod \"collector-x5zk2\" (UID: \"8dd9e504-c718-4778-972a-da408fd6c2fe\") " pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.537817 4762 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-logging/collector-x5zk2" Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.825725 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-x5zk2"] Mar 08 00:37:45 crc kubenswrapper[4762]: I0308 00:37:45.843552 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 00:37:46 crc kubenswrapper[4762]: I0308 00:37:46.122824 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-x5zk2" event={"ID":"8dd9e504-c718-4778-972a-da408fd6c2fe","Type":"ContainerStarted","Data":"d0e6e48bbbe1fc48f101f56ddd5ee8f90eb89eaf48246dd6ce0c0ca704f1004d"} Mar 08 00:37:53 crc kubenswrapper[4762]: I0308 00:37:53.184481 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-x5zk2" event={"ID":"8dd9e504-c718-4778-972a-da408fd6c2fe","Type":"ContainerStarted","Data":"8975690facf0aac5d9f4b42206d20d80490563ecde209a559fd296b277e70ec8"} Mar 08 00:37:57 crc kubenswrapper[4762]: I0308 00:37:57.456668 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-x5zk2" podStartSLOduration=5.845849749 podStartE2EDuration="12.456646756s" podCreationTimestamp="2026-03-08 00:37:45 +0000 UTC" firstStartedPulling="2026-03-08 00:37:45.843157483 +0000 UTC m=+887.317301837" lastFinishedPulling="2026-03-08 00:37:52.45395447 +0000 UTC m=+893.928098844" observedRunningTime="2026-03-08 00:37:53.226357465 +0000 UTC m=+894.700501849" watchObservedRunningTime="2026-03-08 00:37:57.456646756 +0000 UTC m=+898.930791100" Mar 08 00:37:57 crc kubenswrapper[4762]: I0308 00:37:57.464044 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2sh8b"] Mar 08 00:37:57 crc kubenswrapper[4762]: I0308 00:37:57.466270 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:37:57 crc kubenswrapper[4762]: I0308 00:37:57.476598 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2sh8b"] Mar 08 00:37:57 crc kubenswrapper[4762]: I0308 00:37:57.519993 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/969fd5d1-3dca-415e-ad73-ee47d5d647f5-catalog-content\") pod \"redhat-marketplace-2sh8b\" (UID: \"969fd5d1-3dca-415e-ad73-ee47d5d647f5\") " pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:37:57 crc kubenswrapper[4762]: I0308 00:37:57.520129 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/969fd5d1-3dca-415e-ad73-ee47d5d647f5-utilities\") pod \"redhat-marketplace-2sh8b\" (UID: \"969fd5d1-3dca-415e-ad73-ee47d5d647f5\") " pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:37:57 crc kubenswrapper[4762]: I0308 00:37:57.520216 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q7pc\" (UniqueName: \"kubernetes.io/projected/969fd5d1-3dca-415e-ad73-ee47d5d647f5-kube-api-access-8q7pc\") pod \"redhat-marketplace-2sh8b\" (UID: \"969fd5d1-3dca-415e-ad73-ee47d5d647f5\") " pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:37:57 crc kubenswrapper[4762]: I0308 00:37:57.622455 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q7pc\" (UniqueName: \"kubernetes.io/projected/969fd5d1-3dca-415e-ad73-ee47d5d647f5-kube-api-access-8q7pc\") pod \"redhat-marketplace-2sh8b\" (UID: \"969fd5d1-3dca-415e-ad73-ee47d5d647f5\") " pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:37:57 crc kubenswrapper[4762]: I0308 00:37:57.622557 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/969fd5d1-3dca-415e-ad73-ee47d5d647f5-catalog-content\") pod \"redhat-marketplace-2sh8b\" (UID: \"969fd5d1-3dca-415e-ad73-ee47d5d647f5\") " pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:37:57 crc kubenswrapper[4762]: I0308 00:37:57.622597 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/969fd5d1-3dca-415e-ad73-ee47d5d647f5-utilities\") pod \"redhat-marketplace-2sh8b\" (UID: \"969fd5d1-3dca-415e-ad73-ee47d5d647f5\") " pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:37:57 crc kubenswrapper[4762]: I0308 00:37:57.623121 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/969fd5d1-3dca-415e-ad73-ee47d5d647f5-utilities\") pod \"redhat-marketplace-2sh8b\" (UID: \"969fd5d1-3dca-415e-ad73-ee47d5d647f5\") " pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:37:57 crc kubenswrapper[4762]: I0308 00:37:57.623296 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/969fd5d1-3dca-415e-ad73-ee47d5d647f5-catalog-content\") pod \"redhat-marketplace-2sh8b\" (UID: \"969fd5d1-3dca-415e-ad73-ee47d5d647f5\") " pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:37:57 crc kubenswrapper[4762]: I0308 00:37:57.649237 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q7pc\" (UniqueName: \"kubernetes.io/projected/969fd5d1-3dca-415e-ad73-ee47d5d647f5-kube-api-access-8q7pc\") pod \"redhat-marketplace-2sh8b\" (UID: \"969fd5d1-3dca-415e-ad73-ee47d5d647f5\") " pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:37:57 crc kubenswrapper[4762]: I0308 00:37:57.784090 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:37:58 crc kubenswrapper[4762]: I0308 00:37:58.280855 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2sh8b"] Mar 08 00:37:58 crc kubenswrapper[4762]: W0308 00:37:58.297117 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod969fd5d1_3dca_415e_ad73_ee47d5d647f5.slice/crio-1d01a68dcba688b6c81afea3aa9c7aa8cb2493c719300bec4096871b37649537 WatchSource:0}: Error finding container 1d01a68dcba688b6c81afea3aa9c7aa8cb2493c719300bec4096871b37649537: Status 404 returned error can't find the container with id 1d01a68dcba688b6c81afea3aa9c7aa8cb2493c719300bec4096871b37649537 Mar 08 00:37:59 crc kubenswrapper[4762]: I0308 00:37:59.235258 4762 generic.go:334] "Generic (PLEG): container finished" podID="969fd5d1-3dca-415e-ad73-ee47d5d647f5" containerID="ab80cee53844aa627244c9c3ac69cf20f6ec147cdee134f630087cd81d262761" exitCode=0 Mar 08 00:37:59 crc kubenswrapper[4762]: I0308 00:37:59.235628 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sh8b" event={"ID":"969fd5d1-3dca-415e-ad73-ee47d5d647f5","Type":"ContainerDied","Data":"ab80cee53844aa627244c9c3ac69cf20f6ec147cdee134f630087cd81d262761"} Mar 08 00:37:59 crc kubenswrapper[4762]: I0308 00:37:59.235666 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sh8b" event={"ID":"969fd5d1-3dca-415e-ad73-ee47d5d647f5","Type":"ContainerStarted","Data":"1d01a68dcba688b6c81afea3aa9c7aa8cb2493c719300bec4096871b37649537"} Mar 08 00:38:00 crc kubenswrapper[4762]: I0308 00:38:00.152833 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548838-b4q6k"] Mar 08 00:38:00 crc kubenswrapper[4762]: I0308 00:38:00.154573 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548838-b4q6k" Mar 08 00:38:00 crc kubenswrapper[4762]: I0308 00:38:00.159808 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ln45\" (UniqueName: \"kubernetes.io/projected/28c72edf-8a55-4f3a-8c65-da6a0b0531c9-kube-api-access-4ln45\") pod \"auto-csr-approver-29548838-b4q6k\" (UID: \"28c72edf-8a55-4f3a-8c65-da6a0b0531c9\") " pod="openshift-infra/auto-csr-approver-29548838-b4q6k" Mar 08 00:38:00 crc kubenswrapper[4762]: I0308 00:38:00.160552 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548838-b4q6k"] Mar 08 00:38:00 crc kubenswrapper[4762]: I0308 00:38:00.163856 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 00:38:00 crc kubenswrapper[4762]: I0308 00:38:00.164053 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 00:38:00 crc kubenswrapper[4762]: I0308 00:38:00.164168 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 00:38:00 crc kubenswrapper[4762]: I0308 00:38:00.242935 4762 generic.go:334] "Generic (PLEG): container finished" podID="969fd5d1-3dca-415e-ad73-ee47d5d647f5" containerID="ff00a5927e2034c3a9a0598bb73e8b2989c86876e69564af6256ec26b694379c" exitCode=0 Mar 08 00:38:00 crc kubenswrapper[4762]: I0308 00:38:00.242985 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sh8b" event={"ID":"969fd5d1-3dca-415e-ad73-ee47d5d647f5","Type":"ContainerDied","Data":"ff00a5927e2034c3a9a0598bb73e8b2989c86876e69564af6256ec26b694379c"} Mar 08 00:38:00 crc kubenswrapper[4762]: I0308 00:38:00.261344 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ln45\" (UniqueName: 
\"kubernetes.io/projected/28c72edf-8a55-4f3a-8c65-da6a0b0531c9-kube-api-access-4ln45\") pod \"auto-csr-approver-29548838-b4q6k\" (UID: \"28c72edf-8a55-4f3a-8c65-da6a0b0531c9\") " pod="openshift-infra/auto-csr-approver-29548838-b4q6k" Mar 08 00:38:00 crc kubenswrapper[4762]: I0308 00:38:00.281016 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ln45\" (UniqueName: \"kubernetes.io/projected/28c72edf-8a55-4f3a-8c65-da6a0b0531c9-kube-api-access-4ln45\") pod \"auto-csr-approver-29548838-b4q6k\" (UID: \"28c72edf-8a55-4f3a-8c65-da6a0b0531c9\") " pod="openshift-infra/auto-csr-approver-29548838-b4q6k" Mar 08 00:38:00 crc kubenswrapper[4762]: I0308 00:38:00.476827 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548838-b4q6k" Mar 08 00:38:00 crc kubenswrapper[4762]: I0308 00:38:00.815228 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548838-b4q6k"] Mar 08 00:38:00 crc kubenswrapper[4762]: W0308 00:38:00.823000 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28c72edf_8a55_4f3a_8c65_da6a0b0531c9.slice/crio-bc6481a803c475168407f0bf49da93c1521c4225ec5391d3ca17820426d0d7c8 WatchSource:0}: Error finding container bc6481a803c475168407f0bf49da93c1521c4225ec5391d3ca17820426d0d7c8: Status 404 returned error can't find the container with id bc6481a803c475168407f0bf49da93c1521c4225ec5391d3ca17820426d0d7c8 Mar 08 00:38:01 crc kubenswrapper[4762]: I0308 00:38:01.255945 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548838-b4q6k" event={"ID":"28c72edf-8a55-4f3a-8c65-da6a0b0531c9","Type":"ContainerStarted","Data":"bc6481a803c475168407f0bf49da93c1521c4225ec5391d3ca17820426d0d7c8"} Mar 08 00:38:01 crc kubenswrapper[4762]: I0308 00:38:01.261883 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-2sh8b" event={"ID":"969fd5d1-3dca-415e-ad73-ee47d5d647f5","Type":"ContainerStarted","Data":"2342ba64de7d5ba813d1f00aa2001b98d0cca5b6603b8f4622f049ae3fcfdd2f"} Mar 08 00:38:01 crc kubenswrapper[4762]: I0308 00:38:01.297887 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2sh8b" podStartSLOduration=2.866165563 podStartE2EDuration="4.297854745s" podCreationTimestamp="2026-03-08 00:37:57 +0000 UTC" firstStartedPulling="2026-03-08 00:37:59.237835625 +0000 UTC m=+900.711980019" lastFinishedPulling="2026-03-08 00:38:00.669524857 +0000 UTC m=+902.143669201" observedRunningTime="2026-03-08 00:38:01.286035853 +0000 UTC m=+902.760180267" watchObservedRunningTime="2026-03-08 00:38:01.297854745 +0000 UTC m=+902.771999139" Mar 08 00:38:02 crc kubenswrapper[4762]: I0308 00:38:02.272942 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548838-b4q6k" event={"ID":"28c72edf-8a55-4f3a-8c65-da6a0b0531c9","Type":"ContainerStarted","Data":"9c2425a248af30a42dfb19406e5edbbc08ad5554d9faa03d93aa124095977485"} Mar 08 00:38:02 crc kubenswrapper[4762]: I0308 00:38:02.296970 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548838-b4q6k" podStartSLOduration=1.533135771 podStartE2EDuration="2.296942881s" podCreationTimestamp="2026-03-08 00:38:00 +0000 UTC" firstStartedPulling="2026-03-08 00:38:00.826714514 +0000 UTC m=+902.300858858" lastFinishedPulling="2026-03-08 00:38:01.590521604 +0000 UTC m=+903.064665968" observedRunningTime="2026-03-08 00:38:02.295023883 +0000 UTC m=+903.769168267" watchObservedRunningTime="2026-03-08 00:38:02.296942881 +0000 UTC m=+903.771087265" Mar 08 00:38:03 crc kubenswrapper[4762]: I0308 00:38:03.284285 4762 generic.go:334] "Generic (PLEG): container finished" podID="28c72edf-8a55-4f3a-8c65-da6a0b0531c9" 
containerID="9c2425a248af30a42dfb19406e5edbbc08ad5554d9faa03d93aa124095977485" exitCode=0 Mar 08 00:38:03 crc kubenswrapper[4762]: I0308 00:38:03.284385 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548838-b4q6k" event={"ID":"28c72edf-8a55-4f3a-8c65-da6a0b0531c9","Type":"ContainerDied","Data":"9c2425a248af30a42dfb19406e5edbbc08ad5554d9faa03d93aa124095977485"} Mar 08 00:38:04 crc kubenswrapper[4762]: I0308 00:38:04.685436 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548838-b4q6k" Mar 08 00:38:04 crc kubenswrapper[4762]: I0308 00:38:04.834793 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ln45\" (UniqueName: \"kubernetes.io/projected/28c72edf-8a55-4f3a-8c65-da6a0b0531c9-kube-api-access-4ln45\") pod \"28c72edf-8a55-4f3a-8c65-da6a0b0531c9\" (UID: \"28c72edf-8a55-4f3a-8c65-da6a0b0531c9\") " Mar 08 00:38:04 crc kubenswrapper[4762]: I0308 00:38:04.840704 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28c72edf-8a55-4f3a-8c65-da6a0b0531c9-kube-api-access-4ln45" (OuterVolumeSpecName: "kube-api-access-4ln45") pod "28c72edf-8a55-4f3a-8c65-da6a0b0531c9" (UID: "28c72edf-8a55-4f3a-8c65-da6a0b0531c9"). InnerVolumeSpecName "kube-api-access-4ln45". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:38:04 crc kubenswrapper[4762]: I0308 00:38:04.936600 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ln45\" (UniqueName: \"kubernetes.io/projected/28c72edf-8a55-4f3a-8c65-da6a0b0531c9-kube-api-access-4ln45\") on node \"crc\" DevicePath \"\"" Mar 08 00:38:05 crc kubenswrapper[4762]: I0308 00:38:05.321682 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548838-b4q6k" Mar 08 00:38:05 crc kubenswrapper[4762]: I0308 00:38:05.324139 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548838-b4q6k" event={"ID":"28c72edf-8a55-4f3a-8c65-da6a0b0531c9","Type":"ContainerDied","Data":"bc6481a803c475168407f0bf49da93c1521c4225ec5391d3ca17820426d0d7c8"} Mar 08 00:38:05 crc kubenswrapper[4762]: I0308 00:38:05.324366 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc6481a803c475168407f0bf49da93c1521c4225ec5391d3ca17820426d0d7c8" Mar 08 00:38:05 crc kubenswrapper[4762]: I0308 00:38:05.373821 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548832-ptz47"] Mar 08 00:38:05 crc kubenswrapper[4762]: I0308 00:38:05.383698 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548832-ptz47"] Mar 08 00:38:07 crc kubenswrapper[4762]: I0308 00:38:07.277664 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91b5626c-25cc-4cfa-a896-4eb18325572f" path="/var/lib/kubelet/pods/91b5626c-25cc-4cfa-a896-4eb18325572f/volumes" Mar 08 00:38:07 crc kubenswrapper[4762]: I0308 00:38:07.785453 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:38:07 crc kubenswrapper[4762]: I0308 00:38:07.786057 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:38:07 crc kubenswrapper[4762]: I0308 00:38:07.857025 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:38:08 crc kubenswrapper[4762]: I0308 00:38:08.402706 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:38:08 crc 
kubenswrapper[4762]: I0308 00:38:08.471940 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2sh8b"] Mar 08 00:38:10 crc kubenswrapper[4762]: I0308 00:38:10.367165 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2sh8b" podUID="969fd5d1-3dca-415e-ad73-ee47d5d647f5" containerName="registry-server" containerID="cri-o://2342ba64de7d5ba813d1f00aa2001b98d0cca5b6603b8f4622f049ae3fcfdd2f" gracePeriod=2 Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.369454 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.379946 4762 generic.go:334] "Generic (PLEG): container finished" podID="969fd5d1-3dca-415e-ad73-ee47d5d647f5" containerID="2342ba64de7d5ba813d1f00aa2001b98d0cca5b6603b8f4622f049ae3fcfdd2f" exitCode=0 Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.379988 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sh8b" event={"ID":"969fd5d1-3dca-415e-ad73-ee47d5d647f5","Type":"ContainerDied","Data":"2342ba64de7d5ba813d1f00aa2001b98d0cca5b6603b8f4622f049ae3fcfdd2f"} Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.380016 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2sh8b" event={"ID":"969fd5d1-3dca-415e-ad73-ee47d5d647f5","Type":"ContainerDied","Data":"1d01a68dcba688b6c81afea3aa9c7aa8cb2493c719300bec4096871b37649537"} Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.380035 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2sh8b" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.380041 4762 scope.go:117] "RemoveContainer" containerID="2342ba64de7d5ba813d1f00aa2001b98d0cca5b6603b8f4622f049ae3fcfdd2f" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.419110 4762 scope.go:117] "RemoveContainer" containerID="ff00a5927e2034c3a9a0598bb73e8b2989c86876e69564af6256ec26b694379c" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.460043 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/969fd5d1-3dca-415e-ad73-ee47d5d647f5-utilities\") pod \"969fd5d1-3dca-415e-ad73-ee47d5d647f5\" (UID: \"969fd5d1-3dca-415e-ad73-ee47d5d647f5\") " Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.460163 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q7pc\" (UniqueName: \"kubernetes.io/projected/969fd5d1-3dca-415e-ad73-ee47d5d647f5-kube-api-access-8q7pc\") pod \"969fd5d1-3dca-415e-ad73-ee47d5d647f5\" (UID: \"969fd5d1-3dca-415e-ad73-ee47d5d647f5\") " Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.460213 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/969fd5d1-3dca-415e-ad73-ee47d5d647f5-catalog-content\") pod \"969fd5d1-3dca-415e-ad73-ee47d5d647f5\" (UID: \"969fd5d1-3dca-415e-ad73-ee47d5d647f5\") " Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.460340 4762 scope.go:117] "RemoveContainer" containerID="ab80cee53844aa627244c9c3ac69cf20f6ec147cdee134f630087cd81d262761" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.462187 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/969fd5d1-3dca-415e-ad73-ee47d5d647f5-utilities" (OuterVolumeSpecName: "utilities") pod "969fd5d1-3dca-415e-ad73-ee47d5d647f5" (UID: 
"969fd5d1-3dca-415e-ad73-ee47d5d647f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.470259 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/969fd5d1-3dca-415e-ad73-ee47d5d647f5-kube-api-access-8q7pc" (OuterVolumeSpecName: "kube-api-access-8q7pc") pod "969fd5d1-3dca-415e-ad73-ee47d5d647f5" (UID: "969fd5d1-3dca-415e-ad73-ee47d5d647f5"). InnerVolumeSpecName "kube-api-access-8q7pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.497542 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/969fd5d1-3dca-415e-ad73-ee47d5d647f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "969fd5d1-3dca-415e-ad73-ee47d5d647f5" (UID: "969fd5d1-3dca-415e-ad73-ee47d5d647f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.505279 4762 scope.go:117] "RemoveContainer" containerID="2342ba64de7d5ba813d1f00aa2001b98d0cca5b6603b8f4622f049ae3fcfdd2f" Mar 08 00:38:11 crc kubenswrapper[4762]: E0308 00:38:11.505981 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2342ba64de7d5ba813d1f00aa2001b98d0cca5b6603b8f4622f049ae3fcfdd2f\": container with ID starting with 2342ba64de7d5ba813d1f00aa2001b98d0cca5b6603b8f4622f049ae3fcfdd2f not found: ID does not exist" containerID="2342ba64de7d5ba813d1f00aa2001b98d0cca5b6603b8f4622f049ae3fcfdd2f" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.506093 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2342ba64de7d5ba813d1f00aa2001b98d0cca5b6603b8f4622f049ae3fcfdd2f"} err="failed to get container status 
\"2342ba64de7d5ba813d1f00aa2001b98d0cca5b6603b8f4622f049ae3fcfdd2f\": rpc error: code = NotFound desc = could not find container \"2342ba64de7d5ba813d1f00aa2001b98d0cca5b6603b8f4622f049ae3fcfdd2f\": container with ID starting with 2342ba64de7d5ba813d1f00aa2001b98d0cca5b6603b8f4622f049ae3fcfdd2f not found: ID does not exist" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.506184 4762 scope.go:117] "RemoveContainer" containerID="ff00a5927e2034c3a9a0598bb73e8b2989c86876e69564af6256ec26b694379c" Mar 08 00:38:11 crc kubenswrapper[4762]: E0308 00:38:11.506578 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff00a5927e2034c3a9a0598bb73e8b2989c86876e69564af6256ec26b694379c\": container with ID starting with ff00a5927e2034c3a9a0598bb73e8b2989c86876e69564af6256ec26b694379c not found: ID does not exist" containerID="ff00a5927e2034c3a9a0598bb73e8b2989c86876e69564af6256ec26b694379c" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.506621 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff00a5927e2034c3a9a0598bb73e8b2989c86876e69564af6256ec26b694379c"} err="failed to get container status \"ff00a5927e2034c3a9a0598bb73e8b2989c86876e69564af6256ec26b694379c\": rpc error: code = NotFound desc = could not find container \"ff00a5927e2034c3a9a0598bb73e8b2989c86876e69564af6256ec26b694379c\": container with ID starting with ff00a5927e2034c3a9a0598bb73e8b2989c86876e69564af6256ec26b694379c not found: ID does not exist" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.506666 4762 scope.go:117] "RemoveContainer" containerID="ab80cee53844aa627244c9c3ac69cf20f6ec147cdee134f630087cd81d262761" Mar 08 00:38:11 crc kubenswrapper[4762]: E0308 00:38:11.506914 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ab80cee53844aa627244c9c3ac69cf20f6ec147cdee134f630087cd81d262761\": container with ID starting with ab80cee53844aa627244c9c3ac69cf20f6ec147cdee134f630087cd81d262761 not found: ID does not exist" containerID="ab80cee53844aa627244c9c3ac69cf20f6ec147cdee134f630087cd81d262761" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.506940 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab80cee53844aa627244c9c3ac69cf20f6ec147cdee134f630087cd81d262761"} err="failed to get container status \"ab80cee53844aa627244c9c3ac69cf20f6ec147cdee134f630087cd81d262761\": rpc error: code = NotFound desc = could not find container \"ab80cee53844aa627244c9c3ac69cf20f6ec147cdee134f630087cd81d262761\": container with ID starting with ab80cee53844aa627244c9c3ac69cf20f6ec147cdee134f630087cd81d262761 not found: ID does not exist" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.562452 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/969fd5d1-3dca-415e-ad73-ee47d5d647f5-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.563030 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q7pc\" (UniqueName: \"kubernetes.io/projected/969fd5d1-3dca-415e-ad73-ee47d5d647f5-kube-api-access-8q7pc\") on node \"crc\" DevicePath \"\"" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.563181 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/969fd5d1-3dca-415e-ad73-ee47d5d647f5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.726078 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2sh8b"] Mar 08 00:38:11 crc kubenswrapper[4762]: I0308 00:38:11.732828 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-2sh8b"] Mar 08 00:38:12 crc kubenswrapper[4762]: I0308 00:38:12.852315 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 00:38:12 crc kubenswrapper[4762]: I0308 00:38:12.853016 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:38:13 crc kubenswrapper[4762]: I0308 00:38:13.279311 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="969fd5d1-3dca-415e-ad73-ee47d5d647f5" path="/var/lib/kubelet/pods/969fd5d1-3dca-415e-ad73-ee47d5d647f5/volumes" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.097519 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9"] Mar 08 00:38:16 crc kubenswrapper[4762]: E0308 00:38:16.098302 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="969fd5d1-3dca-415e-ad73-ee47d5d647f5" containerName="registry-server" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.098318 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="969fd5d1-3dca-415e-ad73-ee47d5d647f5" containerName="registry-server" Mar 08 00:38:16 crc kubenswrapper[4762]: E0308 00:38:16.098339 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="969fd5d1-3dca-415e-ad73-ee47d5d647f5" containerName="extract-content" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.098346 4762 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="969fd5d1-3dca-415e-ad73-ee47d5d647f5" containerName="extract-content" Mar 08 00:38:16 crc kubenswrapper[4762]: E0308 00:38:16.098359 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="969fd5d1-3dca-415e-ad73-ee47d5d647f5" containerName="extract-utilities" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.098366 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="969fd5d1-3dca-415e-ad73-ee47d5d647f5" containerName="extract-utilities" Mar 08 00:38:16 crc kubenswrapper[4762]: E0308 00:38:16.098381 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c72edf-8a55-4f3a-8c65-da6a0b0531c9" containerName="oc" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.098387 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c72edf-8a55-4f3a-8c65-da6a0b0531c9" containerName="oc" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.098540 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="969fd5d1-3dca-415e-ad73-ee47d5d647f5" containerName="registry-server" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.098560 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="28c72edf-8a55-4f3a-8c65-da6a0b0531c9" containerName="oc" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.099664 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.105248 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.112576 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9"] Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.251748 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86ebd292-c9a1-4ae0-ab20-192155a862d6-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9\" (UID: \"86ebd292-c9a1-4ae0-ab20-192155a862d6\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.252182 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86ebd292-c9a1-4ae0-ab20-192155a862d6-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9\" (UID: \"86ebd292-c9a1-4ae0-ab20-192155a862d6\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.252386 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksg5l\" (UniqueName: \"kubernetes.io/projected/86ebd292-c9a1-4ae0-ab20-192155a862d6-kube-api-access-ksg5l\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9\" (UID: \"86ebd292-c9a1-4ae0-ab20-192155a862d6\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" Mar 08 00:38:16 crc kubenswrapper[4762]: 
I0308 00:38:16.354204 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86ebd292-c9a1-4ae0-ab20-192155a862d6-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9\" (UID: \"86ebd292-c9a1-4ae0-ab20-192155a862d6\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.354304 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86ebd292-c9a1-4ae0-ab20-192155a862d6-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9\" (UID: \"86ebd292-c9a1-4ae0-ab20-192155a862d6\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.354354 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksg5l\" (UniqueName: \"kubernetes.io/projected/86ebd292-c9a1-4ae0-ab20-192155a862d6-kube-api-access-ksg5l\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9\" (UID: \"86ebd292-c9a1-4ae0-ab20-192155a862d6\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.354916 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86ebd292-c9a1-4ae0-ab20-192155a862d6-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9\" (UID: \"86ebd292-c9a1-4ae0-ab20-192155a862d6\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.355667 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/86ebd292-c9a1-4ae0-ab20-192155a862d6-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9\" (UID: \"86ebd292-c9a1-4ae0-ab20-192155a862d6\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.377583 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksg5l\" (UniqueName: \"kubernetes.io/projected/86ebd292-c9a1-4ae0-ab20-192155a862d6-kube-api-access-ksg5l\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9\" (UID: \"86ebd292-c9a1-4ae0-ab20-192155a862d6\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.427101 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" Mar 08 00:38:16 crc kubenswrapper[4762]: I0308 00:38:16.920841 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9"] Mar 08 00:38:17 crc kubenswrapper[4762]: I0308 00:38:17.452458 4762 generic.go:334] "Generic (PLEG): container finished" podID="86ebd292-c9a1-4ae0-ab20-192155a862d6" containerID="305e2594b37435623e73befea7d9ece504005c3c75c639b78fd4a90be68ff06e" exitCode=0 Mar 08 00:38:17 crc kubenswrapper[4762]: I0308 00:38:17.453019 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" event={"ID":"86ebd292-c9a1-4ae0-ab20-192155a862d6","Type":"ContainerDied","Data":"305e2594b37435623e73befea7d9ece504005c3c75c639b78fd4a90be68ff06e"} Mar 08 00:38:17 crc kubenswrapper[4762]: I0308 00:38:17.453073 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" event={"ID":"86ebd292-c9a1-4ae0-ab20-192155a862d6","Type":"ContainerStarted","Data":"b70849de6d26cbf7f83e294051637481033f56ccf733813297ebc49f95498432"} Mar 08 00:38:19 crc kubenswrapper[4762]: I0308 00:38:19.472055 4762 generic.go:334] "Generic (PLEG): container finished" podID="86ebd292-c9a1-4ae0-ab20-192155a862d6" containerID="4730aaf95adc4917b747cd7c9665a2d2fb3966feb5087389ce08afe9ca9d7759" exitCode=0 Mar 08 00:38:19 crc kubenswrapper[4762]: I0308 00:38:19.472183 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" event={"ID":"86ebd292-c9a1-4ae0-ab20-192155a862d6","Type":"ContainerDied","Data":"4730aaf95adc4917b747cd7c9665a2d2fb3966feb5087389ce08afe9ca9d7759"} Mar 08 00:38:20 crc kubenswrapper[4762]: I0308 00:38:20.484517 4762 generic.go:334] "Generic (PLEG): container finished" podID="86ebd292-c9a1-4ae0-ab20-192155a862d6" containerID="aa3aeba7327e7eacbedc5492629ec594fbff122e929bbd07e1e2699579b0182f" exitCode=0 Mar 08 00:38:20 crc kubenswrapper[4762]: I0308 00:38:20.484563 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" event={"ID":"86ebd292-c9a1-4ae0-ab20-192155a862d6","Type":"ContainerDied","Data":"aa3aeba7327e7eacbedc5492629ec594fbff122e929bbd07e1e2699579b0182f"} Mar 08 00:38:21 crc kubenswrapper[4762]: I0308 00:38:21.830679 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" Mar 08 00:38:21 crc kubenswrapper[4762]: I0308 00:38:21.945803 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86ebd292-c9a1-4ae0-ab20-192155a862d6-util\") pod \"86ebd292-c9a1-4ae0-ab20-192155a862d6\" (UID: \"86ebd292-c9a1-4ae0-ab20-192155a862d6\") " Mar 08 00:38:21 crc kubenswrapper[4762]: I0308 00:38:21.945892 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksg5l\" (UniqueName: \"kubernetes.io/projected/86ebd292-c9a1-4ae0-ab20-192155a862d6-kube-api-access-ksg5l\") pod \"86ebd292-c9a1-4ae0-ab20-192155a862d6\" (UID: \"86ebd292-c9a1-4ae0-ab20-192155a862d6\") " Mar 08 00:38:21 crc kubenswrapper[4762]: I0308 00:38:21.945941 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86ebd292-c9a1-4ae0-ab20-192155a862d6-bundle\") pod \"86ebd292-c9a1-4ae0-ab20-192155a862d6\" (UID: \"86ebd292-c9a1-4ae0-ab20-192155a862d6\") " Mar 08 00:38:21 crc kubenswrapper[4762]: I0308 00:38:21.946846 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86ebd292-c9a1-4ae0-ab20-192155a862d6-bundle" (OuterVolumeSpecName: "bundle") pod "86ebd292-c9a1-4ae0-ab20-192155a862d6" (UID: "86ebd292-c9a1-4ae0-ab20-192155a862d6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:38:21 crc kubenswrapper[4762]: I0308 00:38:21.953675 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86ebd292-c9a1-4ae0-ab20-192155a862d6-kube-api-access-ksg5l" (OuterVolumeSpecName: "kube-api-access-ksg5l") pod "86ebd292-c9a1-4ae0-ab20-192155a862d6" (UID: "86ebd292-c9a1-4ae0-ab20-192155a862d6"). InnerVolumeSpecName "kube-api-access-ksg5l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:38:21 crc kubenswrapper[4762]: I0308 00:38:21.964265 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86ebd292-c9a1-4ae0-ab20-192155a862d6-util" (OuterVolumeSpecName: "util") pod "86ebd292-c9a1-4ae0-ab20-192155a862d6" (UID: "86ebd292-c9a1-4ae0-ab20-192155a862d6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:38:22 crc kubenswrapper[4762]: I0308 00:38:22.047691 4762 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/86ebd292-c9a1-4ae0-ab20-192155a862d6-util\") on node \"crc\" DevicePath \"\"" Mar 08 00:38:22 crc kubenswrapper[4762]: I0308 00:38:22.047743 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksg5l\" (UniqueName: \"kubernetes.io/projected/86ebd292-c9a1-4ae0-ab20-192155a862d6-kube-api-access-ksg5l\") on node \"crc\" DevicePath \"\"" Mar 08 00:38:22 crc kubenswrapper[4762]: I0308 00:38:22.047778 4762 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/86ebd292-c9a1-4ae0-ab20-192155a862d6-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:38:22 crc kubenswrapper[4762]: I0308 00:38:22.500046 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" event={"ID":"86ebd292-c9a1-4ae0-ab20-192155a862d6","Type":"ContainerDied","Data":"b70849de6d26cbf7f83e294051637481033f56ccf733813297ebc49f95498432"} Mar 08 00:38:22 crc kubenswrapper[4762]: I0308 00:38:22.500092 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b70849de6d26cbf7f83e294051637481033f56ccf733813297ebc49f95498432" Mar 08 00:38:22 crc kubenswrapper[4762]: I0308 00:38:22.500120 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9" Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.418896 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2wrzk"] Mar 08 00:38:23 crc kubenswrapper[4762]: E0308 00:38:23.419863 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ebd292-c9a1-4ae0-ab20-192155a862d6" containerName="util" Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.419893 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ebd292-c9a1-4ae0-ab20-192155a862d6" containerName="util" Mar 08 00:38:23 crc kubenswrapper[4762]: E0308 00:38:23.419924 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ebd292-c9a1-4ae0-ab20-192155a862d6" containerName="extract" Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.419936 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ebd292-c9a1-4ae0-ab20-192155a862d6" containerName="extract" Mar 08 00:38:23 crc kubenswrapper[4762]: E0308 00:38:23.419963 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ebd292-c9a1-4ae0-ab20-192155a862d6" containerName="pull" Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.419976 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ebd292-c9a1-4ae0-ab20-192155a862d6" containerName="pull" Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.420179 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="86ebd292-c9a1-4ae0-ab20-192155a862d6" containerName="extract" Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.422001 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.442127 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2wrzk"] Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.466796 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f006a27f-93e1-4ced-ae70-2c8f6701c01e-catalog-content\") pod \"community-operators-2wrzk\" (UID: \"f006a27f-93e1-4ced-ae70-2c8f6701c01e\") " pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.466864 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f006a27f-93e1-4ced-ae70-2c8f6701c01e-utilities\") pod \"community-operators-2wrzk\" (UID: \"f006a27f-93e1-4ced-ae70-2c8f6701c01e\") " pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.466927 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tst7\" (UniqueName: \"kubernetes.io/projected/f006a27f-93e1-4ced-ae70-2c8f6701c01e-kube-api-access-2tst7\") pod \"community-operators-2wrzk\" (UID: \"f006a27f-93e1-4ced-ae70-2c8f6701c01e\") " pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.568103 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f006a27f-93e1-4ced-ae70-2c8f6701c01e-catalog-content\") pod \"community-operators-2wrzk\" (UID: \"f006a27f-93e1-4ced-ae70-2c8f6701c01e\") " pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.568167 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f006a27f-93e1-4ced-ae70-2c8f6701c01e-utilities\") pod \"community-operators-2wrzk\" (UID: \"f006a27f-93e1-4ced-ae70-2c8f6701c01e\") " pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.568231 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tst7\" (UniqueName: \"kubernetes.io/projected/f006a27f-93e1-4ced-ae70-2c8f6701c01e-kube-api-access-2tst7\") pod \"community-operators-2wrzk\" (UID: \"f006a27f-93e1-4ced-ae70-2c8f6701c01e\") " pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.568910 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f006a27f-93e1-4ced-ae70-2c8f6701c01e-utilities\") pod \"community-operators-2wrzk\" (UID: \"f006a27f-93e1-4ced-ae70-2c8f6701c01e\") " pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.568954 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f006a27f-93e1-4ced-ae70-2c8f6701c01e-catalog-content\") pod \"community-operators-2wrzk\" (UID: \"f006a27f-93e1-4ced-ae70-2c8f6701c01e\") " pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.602048 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tst7\" (UniqueName: \"kubernetes.io/projected/f006a27f-93e1-4ced-ae70-2c8f6701c01e-kube-api-access-2tst7\") pod \"community-operators-2wrzk\" (UID: \"f006a27f-93e1-4ced-ae70-2c8f6701c01e\") " pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:23 crc kubenswrapper[4762]: I0308 00:38:23.752540 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:24 crc kubenswrapper[4762]: I0308 00:38:24.237820 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2wrzk"] Mar 08 00:38:24 crc kubenswrapper[4762]: I0308 00:38:24.516316 4762 generic.go:334] "Generic (PLEG): container finished" podID="f006a27f-93e1-4ced-ae70-2c8f6701c01e" containerID="a622ab9644a754522ab0c305c5e2cf994d8ac7e845756e7b75caf2ed394d8ecc" exitCode=0 Mar 08 00:38:24 crc kubenswrapper[4762]: I0308 00:38:24.516381 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wrzk" event={"ID":"f006a27f-93e1-4ced-ae70-2c8f6701c01e","Type":"ContainerDied","Data":"a622ab9644a754522ab0c305c5e2cf994d8ac7e845756e7b75caf2ed394d8ecc"} Mar 08 00:38:24 crc kubenswrapper[4762]: I0308 00:38:24.516935 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wrzk" event={"ID":"f006a27f-93e1-4ced-ae70-2c8f6701c01e","Type":"ContainerStarted","Data":"bc5e5f9a6c4b72cb21c4908e616adad0b0b86c037804de8bdb32619f7937272f"} Mar 08 00:38:25 crc kubenswrapper[4762]: I0308 00:38:25.523742 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wrzk" event={"ID":"f006a27f-93e1-4ced-ae70-2c8f6701c01e","Type":"ContainerStarted","Data":"7908f321c3152885f8777be1c9f97ad1f9d32ed997a0e3428d8b8deebbe680b5"} Mar 08 00:38:26 crc kubenswrapper[4762]: I0308 00:38:26.537315 4762 generic.go:334] "Generic (PLEG): container finished" podID="f006a27f-93e1-4ced-ae70-2c8f6701c01e" containerID="7908f321c3152885f8777be1c9f97ad1f9d32ed997a0e3428d8b8deebbe680b5" exitCode=0 Mar 08 00:38:26 crc kubenswrapper[4762]: I0308 00:38:26.537649 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wrzk" 
event={"ID":"f006a27f-93e1-4ced-ae70-2c8f6701c01e","Type":"ContainerDied","Data":"7908f321c3152885f8777be1c9f97ad1f9d32ed997a0e3428d8b8deebbe680b5"} Mar 08 00:38:27 crc kubenswrapper[4762]: I0308 00:38:27.126856 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-zdlpc"] Mar 08 00:38:27 crc kubenswrapper[4762]: I0308 00:38:27.127776 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-zdlpc" Mar 08 00:38:27 crc kubenswrapper[4762]: I0308 00:38:27.129732 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-d4w6c" Mar 08 00:38:27 crc kubenswrapper[4762]: I0308 00:38:27.129900 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 08 00:38:27 crc kubenswrapper[4762]: I0308 00:38:27.130946 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 08 00:38:27 crc kubenswrapper[4762]: I0308 00:38:27.137918 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-zdlpc"] Mar 08 00:38:27 crc kubenswrapper[4762]: I0308 00:38:27.226263 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gjvn\" (UniqueName: \"kubernetes.io/projected/5d4fac50-a9cd-48c4-897d-03de9b1454be-kube-api-access-2gjvn\") pod \"nmstate-operator-75c5dccd6c-zdlpc\" (UID: \"5d4fac50-a9cd-48c4-897d-03de9b1454be\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-zdlpc" Mar 08 00:38:27 crc kubenswrapper[4762]: I0308 00:38:27.328002 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gjvn\" (UniqueName: \"kubernetes.io/projected/5d4fac50-a9cd-48c4-897d-03de9b1454be-kube-api-access-2gjvn\") pod \"nmstate-operator-75c5dccd6c-zdlpc\" (UID: 
\"5d4fac50-a9cd-48c4-897d-03de9b1454be\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-zdlpc" Mar 08 00:38:27 crc kubenswrapper[4762]: I0308 00:38:27.351847 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gjvn\" (UniqueName: \"kubernetes.io/projected/5d4fac50-a9cd-48c4-897d-03de9b1454be-kube-api-access-2gjvn\") pod \"nmstate-operator-75c5dccd6c-zdlpc\" (UID: \"5d4fac50-a9cd-48c4-897d-03de9b1454be\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-zdlpc" Mar 08 00:38:27 crc kubenswrapper[4762]: I0308 00:38:27.503449 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-zdlpc" Mar 08 00:38:27 crc kubenswrapper[4762]: I0308 00:38:27.581218 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wrzk" event={"ID":"f006a27f-93e1-4ced-ae70-2c8f6701c01e","Type":"ContainerStarted","Data":"0608c28286845af72ec6e0f455bb126de6983cdc2e5f728ff815e0638b37454b"} Mar 08 00:38:27 crc kubenswrapper[4762]: I0308 00:38:27.622363 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2wrzk" podStartSLOduration=1.9679275459999999 podStartE2EDuration="4.622346135s" podCreationTimestamp="2026-03-08 00:38:23 +0000 UTC" firstStartedPulling="2026-03-08 00:38:24.517921305 +0000 UTC m=+925.992065649" lastFinishedPulling="2026-03-08 00:38:27.172339893 +0000 UTC m=+928.646484238" observedRunningTime="2026-03-08 00:38:27.6160885 +0000 UTC m=+929.090232844" watchObservedRunningTime="2026-03-08 00:38:27.622346135 +0000 UTC m=+929.096490479" Mar 08 00:38:28 crc kubenswrapper[4762]: I0308 00:38:28.106073 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-zdlpc"] Mar 08 00:38:28 crc kubenswrapper[4762]: W0308 00:38:28.112820 4762 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d4fac50_a9cd_48c4_897d_03de9b1454be.slice/crio-dc63d96307e1576fcc5ef120663b4a8209c59a292febe726aee255eb128e99ba WatchSource:0}: Error finding container dc63d96307e1576fcc5ef120663b4a8209c59a292febe726aee255eb128e99ba: Status 404 returned error can't find the container with id dc63d96307e1576fcc5ef120663b4a8209c59a292febe726aee255eb128e99ba Mar 08 00:38:28 crc kubenswrapper[4762]: I0308 00:38:28.591246 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-zdlpc" event={"ID":"5d4fac50-a9cd-48c4-897d-03de9b1454be","Type":"ContainerStarted","Data":"dc63d96307e1576fcc5ef120663b4a8209c59a292febe726aee255eb128e99ba"} Mar 08 00:38:31 crc kubenswrapper[4762]: I0308 00:38:31.616667 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-zdlpc" event={"ID":"5d4fac50-a9cd-48c4-897d-03de9b1454be","Type":"ContainerStarted","Data":"1f160cfcdfd05d6f213ffa0ff199fcf3387f5c6542ff415d2d84fc95e017fa79"} Mar 08 00:38:31 crc kubenswrapper[4762]: I0308 00:38:31.644183 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-zdlpc" podStartSLOduration=2.240858044 podStartE2EDuration="4.644160217s" podCreationTimestamp="2026-03-08 00:38:27 +0000 UTC" firstStartedPulling="2026-03-08 00:38:28.115616442 +0000 UTC m=+929.589760786" lastFinishedPulling="2026-03-08 00:38:30.518918615 +0000 UTC m=+931.993062959" observedRunningTime="2026-03-08 00:38:31.636586932 +0000 UTC m=+933.110731326" watchObservedRunningTime="2026-03-08 00:38:31.644160217 +0000 UTC m=+933.118304601" Mar 08 00:38:33 crc kubenswrapper[4762]: I0308 00:38:33.753255 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:33 crc kubenswrapper[4762]: I0308 00:38:33.753721 4762 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:33 crc kubenswrapper[4762]: I0308 00:38:33.824799 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:34 crc kubenswrapper[4762]: I0308 00:38:34.695486 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:35 crc kubenswrapper[4762]: I0308 00:38:35.809077 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2wrzk"] Mar 08 00:38:36 crc kubenswrapper[4762]: I0308 00:38:36.656162 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2wrzk" podUID="f006a27f-93e1-4ced-ae70-2c8f6701c01e" containerName="registry-server" containerID="cri-o://0608c28286845af72ec6e0f455bb126de6983cdc2e5f728ff815e0638b37454b" gracePeriod=2 Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.085116 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.176807 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f006a27f-93e1-4ced-ae70-2c8f6701c01e-utilities\") pod \"f006a27f-93e1-4ced-ae70-2c8f6701c01e\" (UID: \"f006a27f-93e1-4ced-ae70-2c8f6701c01e\") " Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.176941 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f006a27f-93e1-4ced-ae70-2c8f6701c01e-catalog-content\") pod \"f006a27f-93e1-4ced-ae70-2c8f6701c01e\" (UID: \"f006a27f-93e1-4ced-ae70-2c8f6701c01e\") " Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.177058 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tst7\" (UniqueName: \"kubernetes.io/projected/f006a27f-93e1-4ced-ae70-2c8f6701c01e-kube-api-access-2tst7\") pod \"f006a27f-93e1-4ced-ae70-2c8f6701c01e\" (UID: \"f006a27f-93e1-4ced-ae70-2c8f6701c01e\") " Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.177810 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f006a27f-93e1-4ced-ae70-2c8f6701c01e-utilities" (OuterVolumeSpecName: "utilities") pod "f006a27f-93e1-4ced-ae70-2c8f6701c01e" (UID: "f006a27f-93e1-4ced-ae70-2c8f6701c01e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.182978 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f006a27f-93e1-4ced-ae70-2c8f6701c01e-kube-api-access-2tst7" (OuterVolumeSpecName: "kube-api-access-2tst7") pod "f006a27f-93e1-4ced-ae70-2c8f6701c01e" (UID: "f006a27f-93e1-4ced-ae70-2c8f6701c01e"). InnerVolumeSpecName "kube-api-access-2tst7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.202188 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-2fg7x"] Mar 08 00:38:37 crc kubenswrapper[4762]: E0308 00:38:37.202537 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f006a27f-93e1-4ced-ae70-2c8f6701c01e" containerName="extract-utilities" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.202557 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f006a27f-93e1-4ced-ae70-2c8f6701c01e" containerName="extract-utilities" Mar 08 00:38:37 crc kubenswrapper[4762]: E0308 00:38:37.202575 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f006a27f-93e1-4ced-ae70-2c8f6701c01e" containerName="extract-content" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.202583 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f006a27f-93e1-4ced-ae70-2c8f6701c01e" containerName="extract-content" Mar 08 00:38:37 crc kubenswrapper[4762]: E0308 00:38:37.202606 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f006a27f-93e1-4ced-ae70-2c8f6701c01e" containerName="registry-server" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.202614 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f006a27f-93e1-4ced-ae70-2c8f6701c01e" containerName="registry-server" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.202790 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f006a27f-93e1-4ced-ae70-2c8f6701c01e" containerName="registry-server" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.203660 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-2fg7x" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.207770 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-85g64" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.207916 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-2fg7x"] Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.228350 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-xtp5w"] Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.229552 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-xtp5w" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.243027 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb"] Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.243909 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.248812 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.278308 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwkfb\" (UniqueName: \"kubernetes.io/projected/5f120575-fb49-4228-a522-8d5182663b94-kube-api-access-qwkfb\") pod \"nmstate-metrics-69594cc75-2fg7x\" (UID: \"5f120575-fb49-4228-a522-8d5182663b94\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-2fg7x" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.278795 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tst7\" (UniqueName: \"kubernetes.io/projected/f006a27f-93e1-4ced-ae70-2c8f6701c01e-kube-api-access-2tst7\") on node \"crc\" DevicePath \"\"" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.278813 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f006a27f-93e1-4ced-ae70-2c8f6701c01e-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.315264 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f006a27f-93e1-4ced-ae70-2c8f6701c01e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f006a27f-93e1-4ced-ae70-2c8f6701c01e" (UID: "f006a27f-93e1-4ced-ae70-2c8f6701c01e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.317288 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb"] Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.387207 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j4kj\" (UniqueName: \"kubernetes.io/projected/ebd76fbf-3a5c-409a-9c6c-5052042a769c-kube-api-access-4j4kj\") pod \"nmstate-handler-xtp5w\" (UID: \"ebd76fbf-3a5c-409a-9c6c-5052042a769c\") " pod="openshift-nmstate/nmstate-handler-xtp5w" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.387263 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwkfb\" (UniqueName: \"kubernetes.io/projected/5f120575-fb49-4228-a522-8d5182663b94-kube-api-access-qwkfb\") pod \"nmstate-metrics-69594cc75-2fg7x\" (UID: \"5f120575-fb49-4228-a522-8d5182663b94\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-2fg7x" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.387313 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7mg2\" (UniqueName: \"kubernetes.io/projected/274d72c4-da34-4213-9aa4-daa52cf6668f-kube-api-access-w7mg2\") pod \"nmstate-webhook-786f45cff4-2fjlb\" (UID: \"274d72c4-da34-4213-9aa4-daa52cf6668f\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.387346 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ebd76fbf-3a5c-409a-9c6c-5052042a769c-dbus-socket\") pod \"nmstate-handler-xtp5w\" (UID: \"ebd76fbf-3a5c-409a-9c6c-5052042a769c\") " pod="openshift-nmstate/nmstate-handler-xtp5w" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.387379 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ebd76fbf-3a5c-409a-9c6c-5052042a769c-nmstate-lock\") pod \"nmstate-handler-xtp5w\" (UID: \"ebd76fbf-3a5c-409a-9c6c-5052042a769c\") " pod="openshift-nmstate/nmstate-handler-xtp5w" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.387458 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ebd76fbf-3a5c-409a-9c6c-5052042a769c-ovs-socket\") pod \"nmstate-handler-xtp5w\" (UID: \"ebd76fbf-3a5c-409a-9c6c-5052042a769c\") " pod="openshift-nmstate/nmstate-handler-xtp5w" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.387536 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/274d72c4-da34-4213-9aa4-daa52cf6668f-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-2fjlb\" (UID: \"274d72c4-da34-4213-9aa4-daa52cf6668f\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.387598 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f006a27f-93e1-4ced-ae70-2c8f6701c01e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.391688 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln"] Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.405235 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.405442 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln"] Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.406990 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-6n5wt" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.409095 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.409523 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.421816 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwkfb\" (UniqueName: \"kubernetes.io/projected/5f120575-fb49-4228-a522-8d5182663b94-kube-api-access-qwkfb\") pod \"nmstate-metrics-69594cc75-2fg7x\" (UID: \"5f120575-fb49-4228-a522-8d5182663b94\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-2fg7x" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.490606 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/274d72c4-da34-4213-9aa4-daa52cf6668f-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-2fjlb\" (UID: \"274d72c4-da34-4213-9aa4-daa52cf6668f\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.490670 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4j4kj\" (UniqueName: \"kubernetes.io/projected/ebd76fbf-3a5c-409a-9c6c-5052042a769c-kube-api-access-4j4kj\") pod \"nmstate-handler-xtp5w\" (UID: \"ebd76fbf-3a5c-409a-9c6c-5052042a769c\") " pod="openshift-nmstate/nmstate-handler-xtp5w" Mar 08 
00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.490702 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7mg2\" (UniqueName: \"kubernetes.io/projected/274d72c4-da34-4213-9aa4-daa52cf6668f-kube-api-access-w7mg2\") pod \"nmstate-webhook-786f45cff4-2fjlb\" (UID: \"274d72c4-da34-4213-9aa4-daa52cf6668f\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.490732 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ebd76fbf-3a5c-409a-9c6c-5052042a769c-dbus-socket\") pod \"nmstate-handler-xtp5w\" (UID: \"ebd76fbf-3a5c-409a-9c6c-5052042a769c\") " pod="openshift-nmstate/nmstate-handler-xtp5w" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.490777 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxqht\" (UniqueName: \"kubernetes.io/projected/d4b2992c-176e-427f-8126-78b2f3992745-kube-api-access-lxqht\") pod \"nmstate-console-plugin-5dcbbd79cf-xkpln\" (UID: \"d4b2992c-176e-427f-8126-78b2f3992745\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.490801 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ebd76fbf-3a5c-409a-9c6c-5052042a769c-nmstate-lock\") pod \"nmstate-handler-xtp5w\" (UID: \"ebd76fbf-3a5c-409a-9c6c-5052042a769c\") " pod="openshift-nmstate/nmstate-handler-xtp5w" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.490842 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d4b2992c-176e-427f-8126-78b2f3992745-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-xkpln\" (UID: \"d4b2992c-176e-427f-8126-78b2f3992745\") " 
pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.490870 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ebd76fbf-3a5c-409a-9c6c-5052042a769c-ovs-socket\") pod \"nmstate-handler-xtp5w\" (UID: \"ebd76fbf-3a5c-409a-9c6c-5052042a769c\") " pod="openshift-nmstate/nmstate-handler-xtp5w" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.490903 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4b2992c-176e-427f-8126-78b2f3992745-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-xkpln\" (UID: \"d4b2992c-176e-427f-8126-78b2f3992745\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.491010 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ebd76fbf-3a5c-409a-9c6c-5052042a769c-nmstate-lock\") pod \"nmstate-handler-xtp5w\" (UID: \"ebd76fbf-3a5c-409a-9c6c-5052042a769c\") " pod="openshift-nmstate/nmstate-handler-xtp5w" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.491058 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ebd76fbf-3a5c-409a-9c6c-5052042a769c-ovs-socket\") pod \"nmstate-handler-xtp5w\" (UID: \"ebd76fbf-3a5c-409a-9c6c-5052042a769c\") " pod="openshift-nmstate/nmstate-handler-xtp5w" Mar 08 00:38:37 crc kubenswrapper[4762]: E0308 00:38:37.491113 4762 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 08 00:38:37 crc kubenswrapper[4762]: E0308 00:38:37.491188 4762 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/274d72c4-da34-4213-9aa4-daa52cf6668f-tls-key-pair podName:274d72c4-da34-4213-9aa4-daa52cf6668f nodeName:}" failed. No retries permitted until 2026-03-08 00:38:37.991149893 +0000 UTC m=+939.465294257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/274d72c4-da34-4213-9aa4-daa52cf6668f-tls-key-pair") pod "nmstate-webhook-786f45cff4-2fjlb" (UID: "274d72c4-da34-4213-9aa4-daa52cf6668f") : secret "openshift-nmstate-webhook" not found Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.491679 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ebd76fbf-3a5c-409a-9c6c-5052042a769c-dbus-socket\") pod \"nmstate-handler-xtp5w\" (UID: \"ebd76fbf-3a5c-409a-9c6c-5052042a769c\") " pod="openshift-nmstate/nmstate-handler-xtp5w" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.510890 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7mg2\" (UniqueName: \"kubernetes.io/projected/274d72c4-da34-4213-9aa4-daa52cf6668f-kube-api-access-w7mg2\") pod \"nmstate-webhook-786f45cff4-2fjlb\" (UID: \"274d72c4-da34-4213-9aa4-daa52cf6668f\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.516362 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-2fg7x" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.527037 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4j4kj\" (UniqueName: \"kubernetes.io/projected/ebd76fbf-3a5c-409a-9c6c-5052042a769c-kube-api-access-4j4kj\") pod \"nmstate-handler-xtp5w\" (UID: \"ebd76fbf-3a5c-409a-9c6c-5052042a769c\") " pod="openshift-nmstate/nmstate-handler-xtp5w" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.568371 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-55d845484c-b9ht8"] Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.569451 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55d845484c-b9ht8" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.570223 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-xtp5w" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.589956 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55d845484c-b9ht8"] Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.592438 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxqht\" (UniqueName: \"kubernetes.io/projected/d4b2992c-176e-427f-8126-78b2f3992745-kube-api-access-lxqht\") pod \"nmstate-console-plugin-5dcbbd79cf-xkpln\" (UID: \"d4b2992c-176e-427f-8126-78b2f3992745\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.592491 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d4b2992c-176e-427f-8126-78b2f3992745-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-xkpln\" (UID: \"d4b2992c-176e-427f-8126-78b2f3992745\") " 
pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.592532 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4b2992c-176e-427f-8126-78b2f3992745-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-xkpln\" (UID: \"d4b2992c-176e-427f-8126-78b2f3992745\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.593430 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d4b2992c-176e-427f-8126-78b2f3992745-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-xkpln\" (UID: \"d4b2992c-176e-427f-8126-78b2f3992745\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.604813 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d4b2992c-176e-427f-8126-78b2f3992745-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-xkpln\" (UID: \"d4b2992c-176e-427f-8126-78b2f3992745\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.620949 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxqht\" (UniqueName: \"kubernetes.io/projected/d4b2992c-176e-427f-8126-78b2f3992745-kube-api-access-lxqht\") pod \"nmstate-console-plugin-5dcbbd79cf-xkpln\" (UID: \"d4b2992c-176e-427f-8126-78b2f3992745\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.672194 4762 generic.go:334] "Generic (PLEG): container finished" podID="f006a27f-93e1-4ced-ae70-2c8f6701c01e" containerID="0608c28286845af72ec6e0f455bb126de6983cdc2e5f728ff815e0638b37454b" exitCode=0 Mar 08 
00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.672240 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wrzk" event={"ID":"f006a27f-93e1-4ced-ae70-2c8f6701c01e","Type":"ContainerDied","Data":"0608c28286845af72ec6e0f455bb126de6983cdc2e5f728ff815e0638b37454b"} Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.672268 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wrzk" event={"ID":"f006a27f-93e1-4ced-ae70-2c8f6701c01e","Type":"ContainerDied","Data":"bc5e5f9a6c4b72cb21c4908e616adad0b0b86c037804de8bdb32619f7937272f"} Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.672283 4762 scope.go:117] "RemoveContainer" containerID="0608c28286845af72ec6e0f455bb126de6983cdc2e5f728ff815e0638b37454b" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.672398 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2wrzk" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.677350 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-xtp5w" event={"ID":"ebd76fbf-3a5c-409a-9c6c-5052042a769c","Type":"ContainerStarted","Data":"291e81aa9fa9e5810698c3a036df3db228f07aaa289efe3302e9cf284e76fbbb"} Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.694062 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2wrzk"] Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.698947 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2wrzk"] Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.699048 4762 scope.go:117] "RemoveContainer" containerID="7908f321c3152885f8777be1c9f97ad1f9d32ed997a0e3428d8b8deebbe680b5" Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.708227 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-oauth-serving-cert\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.708324 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-config\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.708351 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-trusted-ca-bundle\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.708418 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-oauth-config\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.708509 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-serving-cert\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.708541 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-service-ca\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.708642 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lcvm\" (UniqueName: \"kubernetes.io/projected/5ec74e4c-c493-4d6e-a18d-16477eacccc4-kube-api-access-8lcvm\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.713640 4762 scope.go:117] "RemoveContainer" containerID="a622ab9644a754522ab0c305c5e2cf994d8ac7e845756e7b75caf2ed394d8ecc"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.730736 4762 scope.go:117] "RemoveContainer" containerID="0608c28286845af72ec6e0f455bb126de6983cdc2e5f728ff815e0638b37454b"
Mar 08 00:38:37 crc kubenswrapper[4762]: E0308 00:38:37.731213 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0608c28286845af72ec6e0f455bb126de6983cdc2e5f728ff815e0638b37454b\": container with ID starting with 0608c28286845af72ec6e0f455bb126de6983cdc2e5f728ff815e0638b37454b not found: ID does not exist" containerID="0608c28286845af72ec6e0f455bb126de6983cdc2e5f728ff815e0638b37454b"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.731284 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0608c28286845af72ec6e0f455bb126de6983cdc2e5f728ff815e0638b37454b"} err="failed to get container status \"0608c28286845af72ec6e0f455bb126de6983cdc2e5f728ff815e0638b37454b\": rpc error: code = NotFound desc = could not find container \"0608c28286845af72ec6e0f455bb126de6983cdc2e5f728ff815e0638b37454b\": container with ID starting with 0608c28286845af72ec6e0f455bb126de6983cdc2e5f728ff815e0638b37454b not found: ID does not exist"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.731323 4762 scope.go:117] "RemoveContainer" containerID="7908f321c3152885f8777be1c9f97ad1f9d32ed997a0e3428d8b8deebbe680b5"
Mar 08 00:38:37 crc kubenswrapper[4762]: E0308 00:38:37.731664 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7908f321c3152885f8777be1c9f97ad1f9d32ed997a0e3428d8b8deebbe680b5\": container with ID starting with 7908f321c3152885f8777be1c9f97ad1f9d32ed997a0e3428d8b8deebbe680b5 not found: ID does not exist" containerID="7908f321c3152885f8777be1c9f97ad1f9d32ed997a0e3428d8b8deebbe680b5"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.731696 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7908f321c3152885f8777be1c9f97ad1f9d32ed997a0e3428d8b8deebbe680b5"} err="failed to get container status \"7908f321c3152885f8777be1c9f97ad1f9d32ed997a0e3428d8b8deebbe680b5\": rpc error: code = NotFound desc = could not find container \"7908f321c3152885f8777be1c9f97ad1f9d32ed997a0e3428d8b8deebbe680b5\": container with ID starting with 7908f321c3152885f8777be1c9f97ad1f9d32ed997a0e3428d8b8deebbe680b5 not found: ID does not exist"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.731715 4762 scope.go:117] "RemoveContainer" containerID="a622ab9644a754522ab0c305c5e2cf994d8ac7e845756e7b75caf2ed394d8ecc"
Mar 08 00:38:37 crc kubenswrapper[4762]: E0308 00:38:37.732011 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a622ab9644a754522ab0c305c5e2cf994d8ac7e845756e7b75caf2ed394d8ecc\": container with ID starting with a622ab9644a754522ab0c305c5e2cf994d8ac7e845756e7b75caf2ed394d8ecc not found: ID does not exist" containerID="a622ab9644a754522ab0c305c5e2cf994d8ac7e845756e7b75caf2ed394d8ecc"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.732061 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a622ab9644a754522ab0c305c5e2cf994d8ac7e845756e7b75caf2ed394d8ecc"} err="failed to get container status \"a622ab9644a754522ab0c305c5e2cf994d8ac7e845756e7b75caf2ed394d8ecc\": rpc error: code = NotFound desc = could not find container \"a622ab9644a754522ab0c305c5e2cf994d8ac7e845756e7b75caf2ed394d8ecc\": container with ID starting with a622ab9644a754522ab0c305c5e2cf994d8ac7e845756e7b75caf2ed394d8ecc not found: ID does not exist"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.741480 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.809968 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-oauth-serving-cert\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.810034 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-config\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.810054 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-trusted-ca-bundle\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.810101 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-oauth-config\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.810154 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-serving-cert\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.810179 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-service-ca\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.810204 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lcvm\" (UniqueName: \"kubernetes.io/projected/5ec74e4c-c493-4d6e-a18d-16477eacccc4-kube-api-access-8lcvm\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.812271 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-config\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.812698 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-service-ca\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.812914 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-trusted-ca-bundle\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.814374 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-oauth-serving-cert\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.816455 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-oauth-config\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.821598 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-serving-cert\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.835178 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-2fg7x"]
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.842357 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lcvm\" (UniqueName: \"kubernetes.io/projected/5ec74e4c-c493-4d6e-a18d-16477eacccc4-kube-api-access-8lcvm\") pod \"console-55d845484c-b9ht8\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:37 crc kubenswrapper[4762]: W0308 00:38:37.848503 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f120575_fb49_4228_a522_8d5182663b94.slice/crio-9152ea38e31d384ea312ae81000d23d0a0ff3ec6efa2251ca40d6b83a3c8ad3d WatchSource:0}: Error finding container 9152ea38e31d384ea312ae81000d23d0a0ff3ec6efa2251ca40d6b83a3c8ad3d: Status 404 returned error can't find the container with id 9152ea38e31d384ea312ae81000d23d0a0ff3ec6efa2251ca40d6b83a3c8ad3d
Mar 08 00:38:37 crc kubenswrapper[4762]: I0308 00:38:37.915868 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:38 crc kubenswrapper[4762]: I0308 00:38:38.013170 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/274d72c4-da34-4213-9aa4-daa52cf6668f-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-2fjlb\" (UID: \"274d72c4-da34-4213-9aa4-daa52cf6668f\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb"
Mar 08 00:38:38 crc kubenswrapper[4762]: I0308 00:38:38.019569 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/274d72c4-da34-4213-9aa4-daa52cf6668f-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-2fjlb\" (UID: \"274d72c4-da34-4213-9aa4-daa52cf6668f\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb"
Mar 08 00:38:38 crc kubenswrapper[4762]: I0308 00:38:38.216911 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb"
Mar 08 00:38:38 crc kubenswrapper[4762]: I0308 00:38:38.230135 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln"]
Mar 08 00:38:38 crc kubenswrapper[4762]: I0308 00:38:38.344350 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-55d845484c-b9ht8"]
Mar 08 00:38:38 crc kubenswrapper[4762]: W0308 00:38:38.366946 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ec74e4c_c493_4d6e_a18d_16477eacccc4.slice/crio-220dd47c8cd2347d9b196177cc7f6ae0e43d52a0fecba739f1a25acd33a03f4b WatchSource:0}: Error finding container 220dd47c8cd2347d9b196177cc7f6ae0e43d52a0fecba739f1a25acd33a03f4b: Status 404 returned error can't find the container with id 220dd47c8cd2347d9b196177cc7f6ae0e43d52a0fecba739f1a25acd33a03f4b
Mar 08 00:38:38 crc kubenswrapper[4762]: I0308 00:38:38.658275 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb"]
Mar 08 00:38:38 crc kubenswrapper[4762]: W0308 00:38:38.664381 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod274d72c4_da34_4213_9aa4_daa52cf6668f.slice/crio-8622deac4d87612ddf7e6c0f9d474292dc7c9033905921f4c0ab37cc7d29fcab WatchSource:0}: Error finding container 8622deac4d87612ddf7e6c0f9d474292dc7c9033905921f4c0ab37cc7d29fcab: Status 404 returned error can't find the container with id 8622deac4d87612ddf7e6c0f9d474292dc7c9033905921f4c0ab37cc7d29fcab
Mar 08 00:38:38 crc kubenswrapper[4762]: I0308 00:38:38.687011 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln" event={"ID":"d4b2992c-176e-427f-8126-78b2f3992745","Type":"ContainerStarted","Data":"1822b1db137bfbe4a91f5a9bbfd1e55318c52004c14d50a124e296f970cb68c3"}
Mar 08 00:38:38 crc kubenswrapper[4762]: I0308 00:38:38.688225 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb" event={"ID":"274d72c4-da34-4213-9aa4-daa52cf6668f","Type":"ContainerStarted","Data":"8622deac4d87612ddf7e6c0f9d474292dc7c9033905921f4c0ab37cc7d29fcab"}
Mar 08 00:38:38 crc kubenswrapper[4762]: I0308 00:38:38.689844 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55d845484c-b9ht8" event={"ID":"5ec74e4c-c493-4d6e-a18d-16477eacccc4","Type":"ContainerStarted","Data":"c5ef5660a44bab0a8bebbcb647db24b7f35ebbd447e0d420e9e5106142915760"}
Mar 08 00:38:38 crc kubenswrapper[4762]: I0308 00:38:38.689895 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55d845484c-b9ht8" event={"ID":"5ec74e4c-c493-4d6e-a18d-16477eacccc4","Type":"ContainerStarted","Data":"220dd47c8cd2347d9b196177cc7f6ae0e43d52a0fecba739f1a25acd33a03f4b"}
Mar 08 00:38:38 crc kubenswrapper[4762]: I0308 00:38:38.691429 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-2fg7x" event={"ID":"5f120575-fb49-4228-a522-8d5182663b94","Type":"ContainerStarted","Data":"9152ea38e31d384ea312ae81000d23d0a0ff3ec6efa2251ca40d6b83a3c8ad3d"}
Mar 08 00:38:38 crc kubenswrapper[4762]: I0308 00:38:38.708084 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-55d845484c-b9ht8" podStartSLOduration=1.708064598 podStartE2EDuration="1.708064598s" podCreationTimestamp="2026-03-08 00:38:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:38:38.705631356 +0000 UTC m=+940.179775700" watchObservedRunningTime="2026-03-08 00:38:38.708064598 +0000 UTC m=+940.182208942"
Mar 08 00:38:39 crc kubenswrapper[4762]: I0308 00:38:39.274609 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f006a27f-93e1-4ced-ae70-2c8f6701c01e" path="/var/lib/kubelet/pods/f006a27f-93e1-4ced-ae70-2c8f6701c01e/volumes"
Mar 08 00:38:41 crc kubenswrapper[4762]: I0308 00:38:41.711321 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-xtp5w" event={"ID":"ebd76fbf-3a5c-409a-9c6c-5052042a769c","Type":"ContainerStarted","Data":"d26f4bbec0686f0ddf624cc8ae60cc675cd2c6fc7ea0fffa086e1a1f45166e60"}
Mar 08 00:38:41 crc kubenswrapper[4762]: I0308 00:38:41.712112 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-xtp5w"
Mar 08 00:38:41 crc kubenswrapper[4762]: I0308 00:38:41.714514 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-2fg7x" event={"ID":"5f120575-fb49-4228-a522-8d5182663b94","Type":"ContainerStarted","Data":"edf4c161ad132c6f0d82f4afd7ff98b73e5bebd7df93813a85e089b9fae4590f"}
Mar 08 00:38:41 crc kubenswrapper[4762]: I0308 00:38:41.716612 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln" event={"ID":"d4b2992c-176e-427f-8126-78b2f3992745","Type":"ContainerStarted","Data":"a3e9d820989b427cb670a445c52f79ad6f2e8f1db24f2aad4e6364be77bb72f6"}
Mar 08 00:38:41 crc kubenswrapper[4762]: I0308 00:38:41.719843 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb" event={"ID":"274d72c4-da34-4213-9aa4-daa52cf6668f","Type":"ContainerStarted","Data":"02e2fcda8def601759109d9b5bf79ca21536009f924f69554e18e86f09b1aec6"}
Mar 08 00:38:41 crc kubenswrapper[4762]: I0308 00:38:41.720372 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb"
Mar 08 00:38:41 crc kubenswrapper[4762]: I0308 00:38:41.727122 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-xtp5w" podStartSLOduration=1.061006134 podStartE2EDuration="4.727108063s" podCreationTimestamp="2026-03-08 00:38:37 +0000 UTC" firstStartedPulling="2026-03-08 00:38:37.617625559 +0000 UTC m=+939.091769903" lastFinishedPulling="2026-03-08 00:38:41.283727488 +0000 UTC m=+942.757871832" observedRunningTime="2026-03-08 00:38:41.725723442 +0000 UTC m=+943.199867786" watchObservedRunningTime="2026-03-08 00:38:41.727108063 +0000 UTC m=+943.201252397"
Mar 08 00:38:41 crc kubenswrapper[4762]: I0308 00:38:41.753382 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-xkpln" podStartSLOduration=1.709452789 podStartE2EDuration="4.753359083s" podCreationTimestamp="2026-03-08 00:38:37 +0000 UTC" firstStartedPulling="2026-03-08 00:38:38.2332851 +0000 UTC m=+939.707429444" lastFinishedPulling="2026-03-08 00:38:41.277191394 +0000 UTC m=+942.751335738" observedRunningTime="2026-03-08 00:38:41.747015004 +0000 UTC m=+943.221159348" watchObservedRunningTime="2026-03-08 00:38:41.753359083 +0000 UTC m=+943.227503447"
Mar 08 00:38:42 crc kubenswrapper[4762]: I0308 00:38:42.851833 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 08 00:38:42 crc kubenswrapper[4762]: I0308 00:38:42.852318 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 08 00:38:42 crc kubenswrapper[4762]: I0308 00:38:42.852367 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4"
Mar 08 00:38:42 crc kubenswrapper[4762]: I0308 00:38:42.853040 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c2606c2d00d50bbf62802680db1373962883eda1c9950ebf12d9d6c0b5953df4"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 08 00:38:42 crc kubenswrapper[4762]: I0308 00:38:42.853096 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://c2606c2d00d50bbf62802680db1373962883eda1c9950ebf12d9d6c0b5953df4" gracePeriod=600
Mar 08 00:38:43 crc kubenswrapper[4762]: E0308 00:38:43.052570 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e384d81_de01_4ab9_b10b_2c9c5b45422c.slice/crio-conmon-c2606c2d00d50bbf62802680db1373962883eda1c9950ebf12d9d6c0b5953df4.scope\": RecentStats: unable to find data in memory cache]"
Mar 08 00:38:43 crc kubenswrapper[4762]: I0308 00:38:43.734332 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="c2606c2d00d50bbf62802680db1373962883eda1c9950ebf12d9d6c0b5953df4" exitCode=0
Mar 08 00:38:43 crc kubenswrapper[4762]: I0308 00:38:43.734348 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"c2606c2d00d50bbf62802680db1373962883eda1c9950ebf12d9d6c0b5953df4"}
Mar 08 00:38:43 crc kubenswrapper[4762]: I0308 00:38:43.734907 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"fc00848745303e5c66afef8ceef215b964b6d630a4ebb3163157afdcd2292c30"}
Mar 08 00:38:43 crc kubenswrapper[4762]: I0308 00:38:43.734931 4762 scope.go:117] "RemoveContainer" containerID="6e1d379555c081f977d5be76e9ba3af1b94dc051410584368d425f49016b85e4"
Mar 08 00:38:43 crc kubenswrapper[4762]: I0308 00:38:43.751077 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb" podStartSLOduration=4.134557737 podStartE2EDuration="6.751060001s" podCreationTimestamp="2026-03-08 00:38:37 +0000 UTC" firstStartedPulling="2026-03-08 00:38:38.666583185 +0000 UTC m=+940.140727529" lastFinishedPulling="2026-03-08 00:38:41.283085449 +0000 UTC m=+942.757229793" observedRunningTime="2026-03-08 00:38:41.770558803 +0000 UTC m=+943.244703147" watchObservedRunningTime="2026-03-08 00:38:43.751060001 +0000 UTC m=+945.225204345"
Mar 08 00:38:44 crc kubenswrapper[4762]: I0308 00:38:44.748300 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-2fg7x" event={"ID":"5f120575-fb49-4228-a522-8d5182663b94","Type":"ContainerStarted","Data":"cf9c3615518be7e23271206ebebc5e698b6d10d4fc45dabe7fe13bd3b8323760"}
Mar 08 00:38:44 crc kubenswrapper[4762]: I0308 00:38:44.774799 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-2fg7x" podStartSLOduration=1.42761228 podStartE2EDuration="7.774752588s" podCreationTimestamp="2026-03-08 00:38:37 +0000 UTC" firstStartedPulling="2026-03-08 00:38:37.853646567 +0000 UTC m=+939.327790911" lastFinishedPulling="2026-03-08 00:38:44.200786865 +0000 UTC m=+945.674931219" observedRunningTime="2026-03-08 00:38:44.769231034 +0000 UTC m=+946.243375438" watchObservedRunningTime="2026-03-08 00:38:44.774752588 +0000 UTC m=+946.248896972"
Mar 08 00:38:45 crc kubenswrapper[4762]: I0308 00:38:45.707052 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p277g"]
Mar 08 00:38:45 crc kubenswrapper[4762]: I0308 00:38:45.708576 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p277g"
Mar 08 00:38:45 crc kubenswrapper[4762]: I0308 00:38:45.724317 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p277g"]
Mar 08 00:38:45 crc kubenswrapper[4762]: I0308 00:38:45.831553 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-catalog-content\") pod \"certified-operators-p277g\" (UID: \"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d\") " pod="openshift-marketplace/certified-operators-p277g"
Mar 08 00:38:45 crc kubenswrapper[4762]: I0308 00:38:45.831631 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-utilities\") pod \"certified-operators-p277g\" (UID: \"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d\") " pod="openshift-marketplace/certified-operators-p277g"
Mar 08 00:38:45 crc kubenswrapper[4762]: I0308 00:38:45.831712 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkdn6\" (UniqueName: \"kubernetes.io/projected/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-kube-api-access-gkdn6\") pod \"certified-operators-p277g\" (UID: \"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d\") " pod="openshift-marketplace/certified-operators-p277g"
Mar 08 00:38:45 crc kubenswrapper[4762]: I0308 00:38:45.932856 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-catalog-content\") pod \"certified-operators-p277g\" (UID: \"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d\") " pod="openshift-marketplace/certified-operators-p277g"
Mar 08 00:38:45 crc kubenswrapper[4762]: I0308 00:38:45.932916 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-utilities\") pod \"certified-operators-p277g\" (UID: \"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d\") " pod="openshift-marketplace/certified-operators-p277g"
Mar 08 00:38:45 crc kubenswrapper[4762]: I0308 00:38:45.932963 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkdn6\" (UniqueName: \"kubernetes.io/projected/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-kube-api-access-gkdn6\") pod \"certified-operators-p277g\" (UID: \"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d\") " pod="openshift-marketplace/certified-operators-p277g"
Mar 08 00:38:45 crc kubenswrapper[4762]: I0308 00:38:45.933790 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-catalog-content\") pod \"certified-operators-p277g\" (UID: \"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d\") " pod="openshift-marketplace/certified-operators-p277g"
Mar 08 00:38:45 crc kubenswrapper[4762]: I0308 00:38:45.933947 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-utilities\") pod \"certified-operators-p277g\" (UID: \"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d\") " pod="openshift-marketplace/certified-operators-p277g"
Mar 08 00:38:45 crc kubenswrapper[4762]: I0308 00:38:45.963599 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkdn6\" (UniqueName: \"kubernetes.io/projected/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-kube-api-access-gkdn6\") pod \"certified-operators-p277g\" (UID: \"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d\") " pod="openshift-marketplace/certified-operators-p277g"
Mar 08 00:38:46 crc kubenswrapper[4762]: I0308 00:38:46.041640 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p277g"
Mar 08 00:38:46 crc kubenswrapper[4762]: I0308 00:38:46.487483 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p277g"]
Mar 08 00:38:46 crc kubenswrapper[4762]: W0308 00:38:46.496946 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cb8b7cc_4c4e_457e_bf07_fc430e8de95d.slice/crio-0227ee46e1a2d831c69f93635d218e8c0e50b0c264b63aac8819ead9bb6aabd9 WatchSource:0}: Error finding container 0227ee46e1a2d831c69f93635d218e8c0e50b0c264b63aac8819ead9bb6aabd9: Status 404 returned error can't find the container with id 0227ee46e1a2d831c69f93635d218e8c0e50b0c264b63aac8819ead9bb6aabd9
Mar 08 00:38:46 crc kubenswrapper[4762]: I0308 00:38:46.763296 4762 generic.go:334] "Generic (PLEG): container finished" podID="9cb8b7cc-4c4e-457e-bf07-fc430e8de95d" containerID="10237fe72ffc449b8dcba822502cd00a4e07590b5ab662417cb5f112871bfe28" exitCode=0
Mar 08 00:38:46 crc kubenswrapper[4762]: I0308 00:38:46.763340 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p277g" event={"ID":"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d","Type":"ContainerDied","Data":"10237fe72ffc449b8dcba822502cd00a4e07590b5ab662417cb5f112871bfe28"}
Mar 08 00:38:46 crc kubenswrapper[4762]: I0308 00:38:46.763365 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p277g" event={"ID":"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d","Type":"ContainerStarted","Data":"0227ee46e1a2d831c69f93635d218e8c0e50b0c264b63aac8819ead9bb6aabd9"}
Mar 08 00:38:47 crc kubenswrapper[4762]: I0308 00:38:47.614700 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-xtp5w"
Mar 08 00:38:47 crc kubenswrapper[4762]: I0308 00:38:47.916902 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:47 crc kubenswrapper[4762]: I0308 00:38:47.917351 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:47 crc kubenswrapper[4762]: I0308 00:38:47.925981 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:49 crc kubenswrapper[4762]: I0308 00:38:49.302815 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-55d845484c-b9ht8"
Mar 08 00:38:50 crc kubenswrapper[4762]: I0308 00:38:50.036965 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-842sk"]
Mar 08 00:38:50 crc kubenswrapper[4762]: I0308 00:38:50.301623 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p277g" event={"ID":"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d","Type":"ContainerStarted","Data":"c9e5921b4e5f6ba1ca0bb401c13979801ae8822a59a5015616f76ea39f1f9862"}
Mar 08 00:38:51 crc kubenswrapper[4762]: I0308 00:38:51.312601 4762 generic.go:334] "Generic (PLEG): container finished" podID="9cb8b7cc-4c4e-457e-bf07-fc430e8de95d" containerID="c9e5921b4e5f6ba1ca0bb401c13979801ae8822a59a5015616f76ea39f1f9862" exitCode=0
Mar 08 00:38:51 crc kubenswrapper[4762]: I0308 00:38:51.312669 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p277g" event={"ID":"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d","Type":"ContainerDied","Data":"c9e5921b4e5f6ba1ca0bb401c13979801ae8822a59a5015616f76ea39f1f9862"}
Mar 08 00:38:53 crc kubenswrapper[4762]: I0308 00:38:53.332140 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p277g" event={"ID":"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d","Type":"ContainerStarted","Data":"a76b8cfcf7004ee3cf219ae84ff940b7f6a23bb4a4c077072f00c6565bfa987b"}
Mar 08 00:38:56 crc kubenswrapper[4762]: I0308 00:38:56.042325 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p277g"
Mar 08 00:38:56 crc kubenswrapper[4762]: I0308 00:38:56.043594 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p277g"
Mar 08 00:38:56 crc kubenswrapper[4762]: I0308 00:38:56.119395 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p277g"
Mar 08 00:38:56 crc kubenswrapper[4762]: I0308 00:38:56.146445 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p277g" podStartSLOduration=6.168208241 podStartE2EDuration="11.146419431s" podCreationTimestamp="2026-03-08 00:38:45 +0000 UTC" firstStartedPulling="2026-03-08 00:38:46.764613023 +0000 UTC m=+948.238757367" lastFinishedPulling="2026-03-08 00:38:51.742824203 +0000 UTC m=+953.216968557" observedRunningTime="2026-03-08 00:38:53.353507569 +0000 UTC m=+954.827651913" watchObservedRunningTime="2026-03-08 00:38:56.146419431 +0000 UTC m=+957.620563775"
Mar 08 00:38:58 crc kubenswrapper[4762]: I0308 00:38:58.227213 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb"
Mar 08 00:39:00 crc kubenswrapper[4762]: I0308 00:39:00.228713 4762 scope.go:117] "RemoveContainer" containerID="f20bb89b9022b2a026baa790ada1bfe94c58862630f89a75cc361452fc72a043"
Mar 08 00:39:06 crc kubenswrapper[4762]: I0308 00:39:06.109923 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p277g"
Mar 08 00:39:06 crc kubenswrapper[4762]: I0308 00:39:06.174578 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p277g"]
Mar 08 00:39:06 crc kubenswrapper[4762]: I0308 00:39:06.435346 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p277g" podUID="9cb8b7cc-4c4e-457e-bf07-fc430e8de95d" containerName="registry-server" containerID="cri-o://a76b8cfcf7004ee3cf219ae84ff940b7f6a23bb4a4c077072f00c6565bfa987b" gracePeriod=2
Mar 08 00:39:06 crc kubenswrapper[4762]: I0308 00:39:06.945748 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p277g"
Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.063065 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-utilities\") pod \"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d\" (UID: \"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d\") "
Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.063166 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkdn6\" (UniqueName: \"kubernetes.io/projected/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-kube-api-access-gkdn6\") pod \"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d\" (UID: \"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d\") "
Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.063255 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-catalog-content\") pod \"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d\" (UID: \"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d\") "
Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.064297 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-utilities" (OuterVolumeSpecName: "utilities") pod "9cb8b7cc-4c4e-457e-bf07-fc430e8de95d" (UID: "9cb8b7cc-4c4e-457e-bf07-fc430e8de95d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.069796 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-kube-api-access-gkdn6" (OuterVolumeSpecName: "kube-api-access-gkdn6") pod "9cb8b7cc-4c4e-457e-bf07-fc430e8de95d" (UID: "9cb8b7cc-4c4e-457e-bf07-fc430e8de95d"). InnerVolumeSpecName "kube-api-access-gkdn6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.117200 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9cb8b7cc-4c4e-457e-bf07-fc430e8de95d" (UID: "9cb8b7cc-4c4e-457e-bf07-fc430e8de95d"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.165565 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.165617 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkdn6\" (UniqueName: \"kubernetes.io/projected/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-kube-api-access-gkdn6\") on node \"crc\" DevicePath \"\"" Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.165635 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.448597 4762 generic.go:334] "Generic (PLEG): container finished" podID="9cb8b7cc-4c4e-457e-bf07-fc430e8de95d" containerID="a76b8cfcf7004ee3cf219ae84ff940b7f6a23bb4a4c077072f00c6565bfa987b" exitCode=0 Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.448649 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p277g" event={"ID":"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d","Type":"ContainerDied","Data":"a76b8cfcf7004ee3cf219ae84ff940b7f6a23bb4a4c077072f00c6565bfa987b"} Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.448727 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p277g" event={"ID":"9cb8b7cc-4c4e-457e-bf07-fc430e8de95d","Type":"ContainerDied","Data":"0227ee46e1a2d831c69f93635d218e8c0e50b0c264b63aac8819ead9bb6aabd9"} Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.448669 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p277g" Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.448783 4762 scope.go:117] "RemoveContainer" containerID="a76b8cfcf7004ee3cf219ae84ff940b7f6a23bb4a4c077072f00c6565bfa987b" Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.484484 4762 scope.go:117] "RemoveContainer" containerID="c9e5921b4e5f6ba1ca0bb401c13979801ae8822a59a5015616f76ea39f1f9862" Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.489652 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p277g"] Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.498737 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p277g"] Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.510798 4762 scope.go:117] "RemoveContainer" containerID="10237fe72ffc449b8dcba822502cd00a4e07590b5ab662417cb5f112871bfe28" Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.539649 4762 scope.go:117] "RemoveContainer" containerID="a76b8cfcf7004ee3cf219ae84ff940b7f6a23bb4a4c077072f00c6565bfa987b" Mar 08 00:39:07 crc kubenswrapper[4762]: E0308 00:39:07.540142 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a76b8cfcf7004ee3cf219ae84ff940b7f6a23bb4a4c077072f00c6565bfa987b\": container with ID starting with a76b8cfcf7004ee3cf219ae84ff940b7f6a23bb4a4c077072f00c6565bfa987b not found: ID does not exist" containerID="a76b8cfcf7004ee3cf219ae84ff940b7f6a23bb4a4c077072f00c6565bfa987b" Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.540175 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a76b8cfcf7004ee3cf219ae84ff940b7f6a23bb4a4c077072f00c6565bfa987b"} err="failed to get container status \"a76b8cfcf7004ee3cf219ae84ff940b7f6a23bb4a4c077072f00c6565bfa987b\": rpc error: code = NotFound desc = could not find 
container \"a76b8cfcf7004ee3cf219ae84ff940b7f6a23bb4a4c077072f00c6565bfa987b\": container with ID starting with a76b8cfcf7004ee3cf219ae84ff940b7f6a23bb4a4c077072f00c6565bfa987b not found: ID does not exist" Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.540205 4762 scope.go:117] "RemoveContainer" containerID="c9e5921b4e5f6ba1ca0bb401c13979801ae8822a59a5015616f76ea39f1f9862" Mar 08 00:39:07 crc kubenswrapper[4762]: E0308 00:39:07.540430 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9e5921b4e5f6ba1ca0bb401c13979801ae8822a59a5015616f76ea39f1f9862\": container with ID starting with c9e5921b4e5f6ba1ca0bb401c13979801ae8822a59a5015616f76ea39f1f9862 not found: ID does not exist" containerID="c9e5921b4e5f6ba1ca0bb401c13979801ae8822a59a5015616f76ea39f1f9862" Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.540456 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9e5921b4e5f6ba1ca0bb401c13979801ae8822a59a5015616f76ea39f1f9862"} err="failed to get container status \"c9e5921b4e5f6ba1ca0bb401c13979801ae8822a59a5015616f76ea39f1f9862\": rpc error: code = NotFound desc = could not find container \"c9e5921b4e5f6ba1ca0bb401c13979801ae8822a59a5015616f76ea39f1f9862\": container with ID starting with c9e5921b4e5f6ba1ca0bb401c13979801ae8822a59a5015616f76ea39f1f9862 not found: ID does not exist" Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.540468 4762 scope.go:117] "RemoveContainer" containerID="10237fe72ffc449b8dcba822502cd00a4e07590b5ab662417cb5f112871bfe28" Mar 08 00:39:07 crc kubenswrapper[4762]: E0308 00:39:07.540634 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10237fe72ffc449b8dcba822502cd00a4e07590b5ab662417cb5f112871bfe28\": container with ID starting with 10237fe72ffc449b8dcba822502cd00a4e07590b5ab662417cb5f112871bfe28 not found: ID does 
not exist" containerID="10237fe72ffc449b8dcba822502cd00a4e07590b5ab662417cb5f112871bfe28" Mar 08 00:39:07 crc kubenswrapper[4762]: I0308 00:39:07.540657 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10237fe72ffc449b8dcba822502cd00a4e07590b5ab662417cb5f112871bfe28"} err="failed to get container status \"10237fe72ffc449b8dcba822502cd00a4e07590b5ab662417cb5f112871bfe28\": rpc error: code = NotFound desc = could not find container \"10237fe72ffc449b8dcba822502cd00a4e07590b5ab662417cb5f112871bfe28\": container with ID starting with 10237fe72ffc449b8dcba822502cd00a4e07590b5ab662417cb5f112871bfe28 not found: ID does not exist" Mar 08 00:39:09 crc kubenswrapper[4762]: I0308 00:39:09.297658 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cb8b7cc-4c4e-457e-bf07-fc430e8de95d" path="/var/lib/kubelet/pods/9cb8b7cc-4c4e-457e-bf07-fc430e8de95d/volumes" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.090684 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-842sk" podUID="ca82b7d9-bbba-4543-945b-e78923c1d3cf" containerName="console" containerID="cri-o://ea57468b23b3a2bcd77f1ca1079d68f46c95a77d17ad82b727e9d4bc6539a1f6" gracePeriod=15 Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.520610 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-842sk_ca82b7d9-bbba-4543-945b-e78923c1d3cf/console/0.log" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.520947 4762 generic.go:334] "Generic (PLEG): container finished" podID="ca82b7d9-bbba-4543-945b-e78923c1d3cf" containerID="ea57468b23b3a2bcd77f1ca1079d68f46c95a77d17ad82b727e9d4bc6539a1f6" exitCode=2 Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.520979 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-842sk" 
event={"ID":"ca82b7d9-bbba-4543-945b-e78923c1d3cf","Type":"ContainerDied","Data":"ea57468b23b3a2bcd77f1ca1079d68f46c95a77d17ad82b727e9d4bc6539a1f6"} Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.631509 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-842sk_ca82b7d9-bbba-4543-945b-e78923c1d3cf/console/0.log" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.631609 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.826015 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-oauth-config\") pod \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.826239 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtxjs\" (UniqueName: \"kubernetes.io/projected/ca82b7d9-bbba-4543-945b-e78923c1d3cf-kube-api-access-rtxjs\") pod \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.826303 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-trusted-ca-bundle\") pod \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.826339 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-oauth-serving-cert\") pod \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\" (UID: 
\"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.826408 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-service-ca\") pod \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.826532 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-serving-cert\") pod \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.826581 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-config\") pod \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\" (UID: \"ca82b7d9-bbba-4543-945b-e78923c1d3cf\") " Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.829662 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-service-ca" (OuterVolumeSpecName: "service-ca") pod "ca82b7d9-bbba-4543-945b-e78923c1d3cf" (UID: "ca82b7d9-bbba-4543-945b-e78923c1d3cf"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.829689 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-config" (OuterVolumeSpecName: "console-config") pod "ca82b7d9-bbba-4543-945b-e78923c1d3cf" (UID: "ca82b7d9-bbba-4543-945b-e78923c1d3cf"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.829679 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ca82b7d9-bbba-4543-945b-e78923c1d3cf" (UID: "ca82b7d9-bbba-4543-945b-e78923c1d3cf"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.829860 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ca82b7d9-bbba-4543-945b-e78923c1d3cf" (UID: "ca82b7d9-bbba-4543-945b-e78923c1d3cf"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.843289 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ca82b7d9-bbba-4543-945b-e78923c1d3cf" (UID: "ca82b7d9-bbba-4543-945b-e78923c1d3cf"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.843512 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca82b7d9-bbba-4543-945b-e78923c1d3cf-kube-api-access-rtxjs" (OuterVolumeSpecName: "kube-api-access-rtxjs") pod "ca82b7d9-bbba-4543-945b-e78923c1d3cf" (UID: "ca82b7d9-bbba-4543-945b-e78923c1d3cf"). InnerVolumeSpecName "kube-api-access-rtxjs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.853994 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ca82b7d9-bbba-4543-945b-e78923c1d3cf" (UID: "ca82b7d9-bbba-4543-945b-e78923c1d3cf"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.928690 4762 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-service-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.928725 4762 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.928738 4762 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.928746 4762 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ca82b7d9-bbba-4543-945b-e78923c1d3cf-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.928757 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtxjs\" (UniqueName: \"kubernetes.io/projected/ca82b7d9-bbba-4543-945b-e78923c1d3cf-kube-api-access-rtxjs\") on node \"crc\" DevicePath \"\"" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.928766 4762 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:39:15 crc kubenswrapper[4762]: I0308 00:39:15.928775 4762 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ca82b7d9-bbba-4543-945b-e78923c1d3cf-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:39:16 crc kubenswrapper[4762]: I0308 00:39:16.529590 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-842sk_ca82b7d9-bbba-4543-945b-e78923c1d3cf/console/0.log" Mar 08 00:39:16 crc kubenswrapper[4762]: I0308 00:39:16.530092 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-842sk" event={"ID":"ca82b7d9-bbba-4543-945b-e78923c1d3cf","Type":"ContainerDied","Data":"40bf20bb4af8dbc20d6eac0e738b15cc87bafd6500471ed6914809cce7c56549"} Mar 08 00:39:16 crc kubenswrapper[4762]: I0308 00:39:16.530138 4762 scope.go:117] "RemoveContainer" containerID="ea57468b23b3a2bcd77f1ca1079d68f46c95a77d17ad82b727e9d4bc6539a1f6" Mar 08 00:39:16 crc kubenswrapper[4762]: I0308 00:39:16.530180 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-842sk" Mar 08 00:39:16 crc kubenswrapper[4762]: I0308 00:39:16.572747 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-842sk"] Mar 08 00:39:16 crc kubenswrapper[4762]: I0308 00:39:16.578279 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-842sk"] Mar 08 00:39:17 crc kubenswrapper[4762]: I0308 00:39:17.271710 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca82b7d9-bbba-4543-945b-e78923c1d3cf" path="/var/lib/kubelet/pods/ca82b7d9-bbba-4543-945b-e78923c1d3cf/volumes" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.120558 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv"] Mar 08 00:39:18 crc kubenswrapper[4762]: E0308 00:39:18.121671 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb8b7cc-4c4e-457e-bf07-fc430e8de95d" containerName="extract-utilities" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.121876 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb8b7cc-4c4e-457e-bf07-fc430e8de95d" containerName="extract-utilities" Mar 08 00:39:18 crc kubenswrapper[4762]: E0308 00:39:18.122325 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb8b7cc-4c4e-457e-bf07-fc430e8de95d" containerName="extract-content" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.122449 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cb8b7cc-4c4e-457e-bf07-fc430e8de95d" containerName="extract-content" Mar 08 00:39:18 crc kubenswrapper[4762]: E0308 00:39:18.122549 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cb8b7cc-4c4e-457e-bf07-fc430e8de95d" containerName="registry-server" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.122636 4762 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9cb8b7cc-4c4e-457e-bf07-fc430e8de95d" containerName="registry-server" Mar 08 00:39:18 crc kubenswrapper[4762]: E0308 00:39:18.122715 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca82b7d9-bbba-4543-945b-e78923c1d3cf" containerName="console" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.122818 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca82b7d9-bbba-4543-945b-e78923c1d3cf" containerName="console" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.123075 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cb8b7cc-4c4e-457e-bf07-fc430e8de95d" containerName="registry-server" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.123183 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca82b7d9-bbba-4543-945b-e78923c1d3cf" containerName="console" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.124484 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.127118 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.136212 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv"] Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.266670 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv\" (UID: \"7ef5c038-aa5e-4e6f-ac49-1fae6408849b\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 
00:39:18.266747 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgfb9\" (UniqueName: \"kubernetes.io/projected/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-kube-api-access-zgfb9\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv\" (UID: \"7ef5c038-aa5e-4e6f-ac49-1fae6408849b\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.266818 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv\" (UID: \"7ef5c038-aa5e-4e6f-ac49-1fae6408849b\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.369047 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv\" (UID: \"7ef5c038-aa5e-4e6f-ac49-1fae6408849b\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.369246 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv\" (UID: \"7ef5c038-aa5e-4e6f-ac49-1fae6408849b\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.369316 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zgfb9\" (UniqueName: \"kubernetes.io/projected/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-kube-api-access-zgfb9\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv\" (UID: \"7ef5c038-aa5e-4e6f-ac49-1fae6408849b\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.369853 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv\" (UID: \"7ef5c038-aa5e-4e6f-ac49-1fae6408849b\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.370006 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv\" (UID: \"7ef5c038-aa5e-4e6f-ac49-1fae6408849b\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.399658 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgfb9\" (UniqueName: \"kubernetes.io/projected/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-kube-api-access-zgfb9\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv\" (UID: \"7ef5c038-aa5e-4e6f-ac49-1fae6408849b\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.454261 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" Mar 08 00:39:18 crc kubenswrapper[4762]: I0308 00:39:18.966345 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv"] Mar 08 00:39:19 crc kubenswrapper[4762]: I0308 00:39:19.558936 4762 generic.go:334] "Generic (PLEG): container finished" podID="7ef5c038-aa5e-4e6f-ac49-1fae6408849b" containerID="6c91204c9f5cd2a9cf0911a524819bab5c605c7a959d3fa8bc9e6e7643c372c2" exitCode=0 Mar 08 00:39:19 crc kubenswrapper[4762]: I0308 00:39:19.558987 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" event={"ID":"7ef5c038-aa5e-4e6f-ac49-1fae6408849b","Type":"ContainerDied","Data":"6c91204c9f5cd2a9cf0911a524819bab5c605c7a959d3fa8bc9e6e7643c372c2"} Mar 08 00:39:19 crc kubenswrapper[4762]: I0308 00:39:19.559017 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" event={"ID":"7ef5c038-aa5e-4e6f-ac49-1fae6408849b","Type":"ContainerStarted","Data":"89ece135af266fa15092c3aee76afffec8fb3b9a7b916f55cbeb5e5f38d68943"} Mar 08 00:39:21 crc kubenswrapper[4762]: I0308 00:39:21.574357 4762 generic.go:334] "Generic (PLEG): container finished" podID="7ef5c038-aa5e-4e6f-ac49-1fae6408849b" containerID="39fe90024dc1ea54b8a49b1222a781f88ef7f4b91020a7b81e0b4e090c7e04d5" exitCode=0 Mar 08 00:39:21 crc kubenswrapper[4762]: I0308 00:39:21.574431 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" event={"ID":"7ef5c038-aa5e-4e6f-ac49-1fae6408849b","Type":"ContainerDied","Data":"39fe90024dc1ea54b8a49b1222a781f88ef7f4b91020a7b81e0b4e090c7e04d5"} Mar 08 00:39:22 crc kubenswrapper[4762]: I0308 00:39:22.584533 4762 
generic.go:334] "Generic (PLEG): container finished" podID="7ef5c038-aa5e-4e6f-ac49-1fae6408849b" containerID="2d4ad6337802f368b84307a62ea0d35c22ac9fed17487e9e150cad0cf912dfa7" exitCode=0 Mar 08 00:39:22 crc kubenswrapper[4762]: I0308 00:39:22.584657 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" event={"ID":"7ef5c038-aa5e-4e6f-ac49-1fae6408849b","Type":"ContainerDied","Data":"2d4ad6337802f368b84307a62ea0d35c22ac9fed17487e9e150cad0cf912dfa7"} Mar 08 00:39:23 crc kubenswrapper[4762]: I0308 00:39:23.963793 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" Mar 08 00:39:24 crc kubenswrapper[4762]: I0308 00:39:24.067053 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgfb9\" (UniqueName: \"kubernetes.io/projected/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-kube-api-access-zgfb9\") pod \"7ef5c038-aa5e-4e6f-ac49-1fae6408849b\" (UID: \"7ef5c038-aa5e-4e6f-ac49-1fae6408849b\") " Mar 08 00:39:24 crc kubenswrapper[4762]: I0308 00:39:24.067100 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-bundle\") pod \"7ef5c038-aa5e-4e6f-ac49-1fae6408849b\" (UID: \"7ef5c038-aa5e-4e6f-ac49-1fae6408849b\") " Mar 08 00:39:24 crc kubenswrapper[4762]: I0308 00:39:24.067180 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-util\") pod \"7ef5c038-aa5e-4e6f-ac49-1fae6408849b\" (UID: \"7ef5c038-aa5e-4e6f-ac49-1fae6408849b\") " Mar 08 00:39:24 crc kubenswrapper[4762]: I0308 00:39:24.069623 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-bundle" (OuterVolumeSpecName: "bundle") pod "7ef5c038-aa5e-4e6f-ac49-1fae6408849b" (UID: "7ef5c038-aa5e-4e6f-ac49-1fae6408849b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:39:24 crc kubenswrapper[4762]: I0308 00:39:24.073427 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-kube-api-access-zgfb9" (OuterVolumeSpecName: "kube-api-access-zgfb9") pod "7ef5c038-aa5e-4e6f-ac49-1fae6408849b" (UID: "7ef5c038-aa5e-4e6f-ac49-1fae6408849b"). InnerVolumeSpecName "kube-api-access-zgfb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:39:24 crc kubenswrapper[4762]: I0308 00:39:24.084604 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-util" (OuterVolumeSpecName: "util") pod "7ef5c038-aa5e-4e6f-ac49-1fae6408849b" (UID: "7ef5c038-aa5e-4e6f-ac49-1fae6408849b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:39:24 crc kubenswrapper[4762]: I0308 00:39:24.169482 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgfb9\" (UniqueName: \"kubernetes.io/projected/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-kube-api-access-zgfb9\") on node \"crc\" DevicePath \"\"" Mar 08 00:39:24 crc kubenswrapper[4762]: I0308 00:39:24.169552 4762 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:39:24 crc kubenswrapper[4762]: I0308 00:39:24.169574 4762 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7ef5c038-aa5e-4e6f-ac49-1fae6408849b-util\") on node \"crc\" DevicePath \"\"" Mar 08 00:39:24 crc kubenswrapper[4762]: I0308 00:39:24.601808 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" event={"ID":"7ef5c038-aa5e-4e6f-ac49-1fae6408849b","Type":"ContainerDied","Data":"89ece135af266fa15092c3aee76afffec8fb3b9a7b916f55cbeb5e5f38d68943"} Mar 08 00:39:24 crc kubenswrapper[4762]: I0308 00:39:24.602305 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89ece135af266fa15092c3aee76afffec8fb3b9a7b916f55cbeb5e5f38d68943" Mar 08 00:39:24 crc kubenswrapper[4762]: I0308 00:39:24.601941 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.451251 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2"] Mar 08 00:39:33 crc kubenswrapper[4762]: E0308 00:39:33.451826 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef5c038-aa5e-4e6f-ac49-1fae6408849b" containerName="util" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.451839 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef5c038-aa5e-4e6f-ac49-1fae6408849b" containerName="util" Mar 08 00:39:33 crc kubenswrapper[4762]: E0308 00:39:33.451854 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef5c038-aa5e-4e6f-ac49-1fae6408849b" containerName="extract" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.451860 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef5c038-aa5e-4e6f-ac49-1fae6408849b" containerName="extract" Mar 08 00:39:33 crc kubenswrapper[4762]: E0308 00:39:33.451877 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ef5c038-aa5e-4e6f-ac49-1fae6408849b" containerName="pull" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.451883 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ef5c038-aa5e-4e6f-ac49-1fae6408849b" containerName="pull" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.451986 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ef5c038-aa5e-4e6f-ac49-1fae6408849b" containerName="extract" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.452427 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.454214 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.455007 4762 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.455925 4762 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.456176 4762 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-nz6n7" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.456293 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.472870 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2"] Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.617158 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97490dfa-d4e5-4013-8a53-199f5872ea4c-apiservice-cert\") pod \"metallb-operator-controller-manager-58b8966548-4d5g2\" (UID: \"97490dfa-d4e5-4013-8a53-199f5872ea4c\") " pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.617208 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48l8p\" (UniqueName: \"kubernetes.io/projected/97490dfa-d4e5-4013-8a53-199f5872ea4c-kube-api-access-48l8p\") pod 
\"metallb-operator-controller-manager-58b8966548-4d5g2\" (UID: \"97490dfa-d4e5-4013-8a53-199f5872ea4c\") " pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.617395 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97490dfa-d4e5-4013-8a53-199f5872ea4c-webhook-cert\") pod \"metallb-operator-controller-manager-58b8966548-4d5g2\" (UID: \"97490dfa-d4e5-4013-8a53-199f5872ea4c\") " pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.695483 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd"] Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.696551 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.700323 4762 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.700453 4762 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-d6f7n" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.711450 4762 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.712572 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd"] Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.719116 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/97490dfa-d4e5-4013-8a53-199f5872ea4c-apiservice-cert\") pod \"metallb-operator-controller-manager-58b8966548-4d5g2\" (UID: \"97490dfa-d4e5-4013-8a53-199f5872ea4c\") " pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.719166 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48l8p\" (UniqueName: \"kubernetes.io/projected/97490dfa-d4e5-4013-8a53-199f5872ea4c-kube-api-access-48l8p\") pod \"metallb-operator-controller-manager-58b8966548-4d5g2\" (UID: \"97490dfa-d4e5-4013-8a53-199f5872ea4c\") " pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.719225 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97490dfa-d4e5-4013-8a53-199f5872ea4c-webhook-cert\") pod \"metallb-operator-controller-manager-58b8966548-4d5g2\" (UID: \"97490dfa-d4e5-4013-8a53-199f5872ea4c\") " pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.725158 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/97490dfa-d4e5-4013-8a53-199f5872ea4c-webhook-cert\") pod \"metallb-operator-controller-manager-58b8966548-4d5g2\" (UID: \"97490dfa-d4e5-4013-8a53-199f5872ea4c\") " pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.737849 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/97490dfa-d4e5-4013-8a53-199f5872ea4c-apiservice-cert\") pod \"metallb-operator-controller-manager-58b8966548-4d5g2\" (UID: \"97490dfa-d4e5-4013-8a53-199f5872ea4c\") " 
pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.754500 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48l8p\" (UniqueName: \"kubernetes.io/projected/97490dfa-d4e5-4013-8a53-199f5872ea4c-kube-api-access-48l8p\") pod \"metallb-operator-controller-manager-58b8966548-4d5g2\" (UID: \"97490dfa-d4e5-4013-8a53-199f5872ea4c\") " pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.766361 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.820189 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkcq4\" (UniqueName: \"kubernetes.io/projected/cbdc8d75-414a-451a-b594-dc430abfcc09-kube-api-access-bkcq4\") pod \"metallb-operator-webhook-server-6f659bb4d7-nxfzd\" (UID: \"cbdc8d75-414a-451a-b594-dc430abfcc09\") " pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.820283 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cbdc8d75-414a-451a-b594-dc430abfcc09-apiservice-cert\") pod \"metallb-operator-webhook-server-6f659bb4d7-nxfzd\" (UID: \"cbdc8d75-414a-451a-b594-dc430abfcc09\") " pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.820321 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cbdc8d75-414a-451a-b594-dc430abfcc09-webhook-cert\") pod \"metallb-operator-webhook-server-6f659bb4d7-nxfzd\" (UID: 
\"cbdc8d75-414a-451a-b594-dc430abfcc09\") " pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.921893 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkcq4\" (UniqueName: \"kubernetes.io/projected/cbdc8d75-414a-451a-b594-dc430abfcc09-kube-api-access-bkcq4\") pod \"metallb-operator-webhook-server-6f659bb4d7-nxfzd\" (UID: \"cbdc8d75-414a-451a-b594-dc430abfcc09\") " pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.921958 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cbdc8d75-414a-451a-b594-dc430abfcc09-apiservice-cert\") pod \"metallb-operator-webhook-server-6f659bb4d7-nxfzd\" (UID: \"cbdc8d75-414a-451a-b594-dc430abfcc09\") " pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.922000 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cbdc8d75-414a-451a-b594-dc430abfcc09-webhook-cert\") pod \"metallb-operator-webhook-server-6f659bb4d7-nxfzd\" (UID: \"cbdc8d75-414a-451a-b594-dc430abfcc09\") " pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.927846 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cbdc8d75-414a-451a-b594-dc430abfcc09-apiservice-cert\") pod \"metallb-operator-webhook-server-6f659bb4d7-nxfzd\" (UID: \"cbdc8d75-414a-451a-b594-dc430abfcc09\") " pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.937844 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" 
(UniqueName: \"kubernetes.io/secret/cbdc8d75-414a-451a-b594-dc430abfcc09-webhook-cert\") pod \"metallb-operator-webhook-server-6f659bb4d7-nxfzd\" (UID: \"cbdc8d75-414a-451a-b594-dc430abfcc09\") " pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" Mar 08 00:39:33 crc kubenswrapper[4762]: I0308 00:39:33.938880 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkcq4\" (UniqueName: \"kubernetes.io/projected/cbdc8d75-414a-451a-b594-dc430abfcc09-kube-api-access-bkcq4\") pod \"metallb-operator-webhook-server-6f659bb4d7-nxfzd\" (UID: \"cbdc8d75-414a-451a-b594-dc430abfcc09\") " pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" Mar 08 00:39:34 crc kubenswrapper[4762]: I0308 00:39:34.009779 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" Mar 08 00:39:34 crc kubenswrapper[4762]: I0308 00:39:34.275706 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2"] Mar 08 00:39:34 crc kubenswrapper[4762]: I0308 00:39:34.442956 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd"] Mar 08 00:39:34 crc kubenswrapper[4762]: W0308 00:39:34.456796 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbdc8d75_414a_451a_b594_dc430abfcc09.slice/crio-82e7611230b5525f03083fd1b76d2ef7b0858a9dfe58c5be2acd1a787c1c0f6e WatchSource:0}: Error finding container 82e7611230b5525f03083fd1b76d2ef7b0858a9dfe58c5be2acd1a787c1c0f6e: Status 404 returned error can't find the container with id 82e7611230b5525f03083fd1b76d2ef7b0858a9dfe58c5be2acd1a787c1c0f6e Mar 08 00:39:34 crc kubenswrapper[4762]: I0308 00:39:34.668922 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" event={"ID":"cbdc8d75-414a-451a-b594-dc430abfcc09","Type":"ContainerStarted","Data":"82e7611230b5525f03083fd1b76d2ef7b0858a9dfe58c5be2acd1a787c1c0f6e"} Mar 08 00:39:34 crc kubenswrapper[4762]: I0308 00:39:34.670334 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" event={"ID":"97490dfa-d4e5-4013-8a53-199f5872ea4c","Type":"ContainerStarted","Data":"57112f58ffbd3d959aa2f8a3cbe4b76d6bad767fae84450bcc40eb51706ed8a9"} Mar 08 00:39:37 crc kubenswrapper[4762]: I0308 00:39:37.693330 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" event={"ID":"97490dfa-d4e5-4013-8a53-199f5872ea4c","Type":"ContainerStarted","Data":"c98b9456d1cc1989c81e5979ad16f1d9dbd5bd434866f5818483f357d8933810"} Mar 08 00:39:37 crc kubenswrapper[4762]: I0308 00:39:37.693750 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" Mar 08 00:39:37 crc kubenswrapper[4762]: I0308 00:39:37.721957 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" podStartSLOduration=1.67243227 podStartE2EDuration="4.721938605s" podCreationTimestamp="2026-03-08 00:39:33 +0000 UTC" firstStartedPulling="2026-03-08 00:39:34.281466857 +0000 UTC m=+995.755611201" lastFinishedPulling="2026-03-08 00:39:37.330973182 +0000 UTC m=+998.805117536" observedRunningTime="2026-03-08 00:39:37.715335987 +0000 UTC m=+999.189480331" watchObservedRunningTime="2026-03-08 00:39:37.721938605 +0000 UTC m=+999.196082949" Mar 08 00:39:39 crc kubenswrapper[4762]: I0308 00:39:39.716474 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" 
event={"ID":"cbdc8d75-414a-451a-b594-dc430abfcc09","Type":"ContainerStarted","Data":"062367de006e2becd4037ea869e3878fd1ca13373430f1c3b639501ad9459548"} Mar 08 00:39:39 crc kubenswrapper[4762]: I0308 00:39:39.716953 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" Mar 08 00:39:39 crc kubenswrapper[4762]: I0308 00:39:39.747989 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" podStartSLOduration=2.132492822 podStartE2EDuration="6.747959931s" podCreationTimestamp="2026-03-08 00:39:33 +0000 UTC" firstStartedPulling="2026-03-08 00:39:34.460442275 +0000 UTC m=+995.934586619" lastFinishedPulling="2026-03-08 00:39:39.075909384 +0000 UTC m=+1000.550053728" observedRunningTime="2026-03-08 00:39:39.746332603 +0000 UTC m=+1001.220476947" watchObservedRunningTime="2026-03-08 00:39:39.747959931 +0000 UTC m=+1001.222104275" Mar 08 00:39:54 crc kubenswrapper[4762]: I0308 00:39:54.024243 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" Mar 08 00:40:00 crc kubenswrapper[4762]: I0308 00:40:00.158054 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548840-hjssb"] Mar 08 00:40:00 crc kubenswrapper[4762]: I0308 00:40:00.160341 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548840-hjssb" Mar 08 00:40:00 crc kubenswrapper[4762]: I0308 00:40:00.166245 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 00:40:00 crc kubenswrapper[4762]: I0308 00:40:00.168993 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 00:40:00 crc kubenswrapper[4762]: I0308 00:40:00.169860 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 00:40:00 crc kubenswrapper[4762]: I0308 00:40:00.173039 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548840-hjssb"] Mar 08 00:40:00 crc kubenswrapper[4762]: I0308 00:40:00.262586 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npdf9\" (UniqueName: \"kubernetes.io/projected/fe7a9f3e-b771-4b5f-93ed-3092375d617e-kube-api-access-npdf9\") pod \"auto-csr-approver-29548840-hjssb\" (UID: \"fe7a9f3e-b771-4b5f-93ed-3092375d617e\") " pod="openshift-infra/auto-csr-approver-29548840-hjssb" Mar 08 00:40:00 crc kubenswrapper[4762]: I0308 00:40:00.364788 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npdf9\" (UniqueName: \"kubernetes.io/projected/fe7a9f3e-b771-4b5f-93ed-3092375d617e-kube-api-access-npdf9\") pod \"auto-csr-approver-29548840-hjssb\" (UID: \"fe7a9f3e-b771-4b5f-93ed-3092375d617e\") " pod="openshift-infra/auto-csr-approver-29548840-hjssb" Mar 08 00:40:00 crc kubenswrapper[4762]: I0308 00:40:00.398366 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npdf9\" (UniqueName: \"kubernetes.io/projected/fe7a9f3e-b771-4b5f-93ed-3092375d617e-kube-api-access-npdf9\") pod \"auto-csr-approver-29548840-hjssb\" (UID: \"fe7a9f3e-b771-4b5f-93ed-3092375d617e\") " 
pod="openshift-infra/auto-csr-approver-29548840-hjssb" Mar 08 00:40:00 crc kubenswrapper[4762]: I0308 00:40:00.489826 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548840-hjssb" Mar 08 00:40:00 crc kubenswrapper[4762]: I0308 00:40:00.956217 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548840-hjssb"] Mar 08 00:40:01 crc kubenswrapper[4762]: I0308 00:40:01.929238 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548840-hjssb" event={"ID":"fe7a9f3e-b771-4b5f-93ed-3092375d617e","Type":"ContainerStarted","Data":"905a21cd5385dbcb915e221361298ea5edf5c70f6a300c3a36759caf18c53fd9"} Mar 08 00:40:02 crc kubenswrapper[4762]: I0308 00:40:02.940401 4762 generic.go:334] "Generic (PLEG): container finished" podID="fe7a9f3e-b771-4b5f-93ed-3092375d617e" containerID="82bc28b1cfd51e7f29bdfcebcf3c0a11cd81837732695b690ade868be0942473" exitCode=0 Mar 08 00:40:02 crc kubenswrapper[4762]: I0308 00:40:02.940526 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548840-hjssb" event={"ID":"fe7a9f3e-b771-4b5f-93ed-3092375d617e","Type":"ContainerDied","Data":"82bc28b1cfd51e7f29bdfcebcf3c0a11cd81837732695b690ade868be0942473"} Mar 08 00:40:04 crc kubenswrapper[4762]: I0308 00:40:04.327966 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548840-hjssb" Mar 08 00:40:04 crc kubenswrapper[4762]: I0308 00:40:04.437350 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npdf9\" (UniqueName: \"kubernetes.io/projected/fe7a9f3e-b771-4b5f-93ed-3092375d617e-kube-api-access-npdf9\") pod \"fe7a9f3e-b771-4b5f-93ed-3092375d617e\" (UID: \"fe7a9f3e-b771-4b5f-93ed-3092375d617e\") " Mar 08 00:40:04 crc kubenswrapper[4762]: I0308 00:40:04.448916 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe7a9f3e-b771-4b5f-93ed-3092375d617e-kube-api-access-npdf9" (OuterVolumeSpecName: "kube-api-access-npdf9") pod "fe7a9f3e-b771-4b5f-93ed-3092375d617e" (UID: "fe7a9f3e-b771-4b5f-93ed-3092375d617e"). InnerVolumeSpecName "kube-api-access-npdf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:40:04 crc kubenswrapper[4762]: I0308 00:40:04.539566 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npdf9\" (UniqueName: \"kubernetes.io/projected/fe7a9f3e-b771-4b5f-93ed-3092375d617e-kube-api-access-npdf9\") on node \"crc\" DevicePath \"\"" Mar 08 00:40:04 crc kubenswrapper[4762]: I0308 00:40:04.961882 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548840-hjssb" event={"ID":"fe7a9f3e-b771-4b5f-93ed-3092375d617e","Type":"ContainerDied","Data":"905a21cd5385dbcb915e221361298ea5edf5c70f6a300c3a36759caf18c53fd9"} Mar 08 00:40:04 crc kubenswrapper[4762]: I0308 00:40:04.962040 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548840-hjssb" Mar 08 00:40:04 crc kubenswrapper[4762]: I0308 00:40:04.961961 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="905a21cd5385dbcb915e221361298ea5edf5c70f6a300c3a36759caf18c53fd9" Mar 08 00:40:05 crc kubenswrapper[4762]: I0308 00:40:05.414966 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548834-9mnrm"] Mar 08 00:40:05 crc kubenswrapper[4762]: I0308 00:40:05.425162 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548834-9mnrm"] Mar 08 00:40:07 crc kubenswrapper[4762]: I0308 00:40:07.276984 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b177d8e3-9c43-497d-a486-e406692bd63f" path="/var/lib/kubelet/pods/b177d8e3-9c43-497d-a486-e406692bd63f/volumes" Mar 08 00:40:13 crc kubenswrapper[4762]: I0308 00:40:13.770877 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.597980 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz"] Mar 08 00:40:14 crc kubenswrapper[4762]: E0308 00:40:14.598324 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe7a9f3e-b771-4b5f-93ed-3092375d617e" containerName="oc" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.598350 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe7a9f3e-b771-4b5f-93ed-3092375d617e" containerName="oc" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.598504 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe7a9f3e-b771-4b5f-93ed-3092375d617e" containerName="oc" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.599053 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.602909 4762 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.602960 4762 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-mbwhj" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.617069 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-4qgst"] Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.620326 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.623205 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.623336 4762 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.623676 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz"] Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.630933 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a1f5442-2f22-4dff-b59a-0a8233a83b41-cert\") pod \"frr-k8s-webhook-server-7f989f654f-xrnnz\" (UID: \"7a1f5442-2f22-4dff-b59a-0a8233a83b41\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.631005 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7jzc\" (UniqueName: \"kubernetes.io/projected/7a1f5442-2f22-4dff-b59a-0a8233a83b41-kube-api-access-c7jzc\") pod 
\"frr-k8s-webhook-server-7f989f654f-xrnnz\" (UID: \"7a1f5442-2f22-4dff-b59a-0a8233a83b41\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.699040 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-4j4bt"] Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.714014 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-4j4bt" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.719784 4762 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-g8zzq" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.720100 4762 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.720927 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.720951 4762 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.733909 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-qgc88"] Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.734933 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7jzc\" (UniqueName: \"kubernetes.io/projected/7a1f5442-2f22-4dff-b59a-0a8233a83b41-kube-api-access-c7jzc\") pod \"frr-k8s-webhook-server-7f989f654f-xrnnz\" (UID: \"7a1f5442-2f22-4dff-b59a-0a8233a83b41\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.735007 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7ww2\" (UniqueName: 
\"kubernetes.io/projected/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-kube-api-access-s7ww2\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.735039 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-reloader\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.735076 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-frr-startup\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.735104 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-metrics-certs\") pod \"speaker-4j4bt\" (UID: \"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9\") " pod="metallb-system/speaker-4j4bt" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.735203 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-metallb-excludel2\") pod \"speaker-4j4bt\" (UID: \"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9\") " pod="metallb-system/speaker-4j4bt" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.735390 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp964\" (UniqueName: \"kubernetes.io/projected/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-kube-api-access-pp964\") 
pod \"speaker-4j4bt\" (UID: \"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9\") " pod="metallb-system/speaker-4j4bt" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.735501 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-frr-conf\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.735551 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-metrics\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.735725 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-memberlist\") pod \"speaker-4j4bt\" (UID: \"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9\") " pod="metallb-system/speaker-4j4bt" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.735774 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a1f5442-2f22-4dff-b59a-0a8233a83b41-cert\") pod \"frr-k8s-webhook-server-7f989f654f-xrnnz\" (UID: \"7a1f5442-2f22-4dff-b59a-0a8233a83b41\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.735844 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-metrics-certs\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 
00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.735889 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-frr-sockets\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: E0308 00:40:14.736091 4762 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Mar 08 00:40:14 crc kubenswrapper[4762]: E0308 00:40:14.736146 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7a1f5442-2f22-4dff-b59a-0a8233a83b41-cert podName:7a1f5442-2f22-4dff-b59a-0a8233a83b41 nodeName:}" failed. No retries permitted until 2026-03-08 00:40:15.236130903 +0000 UTC m=+1036.710275247 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7a1f5442-2f22-4dff-b59a-0a8233a83b41-cert") pod "frr-k8s-webhook-server-7f989f654f-xrnnz" (UID: "7a1f5442-2f22-4dff-b59a-0a8233a83b41") : secret "frr-k8s-webhook-server-cert" not found Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.741871 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-qgc88" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.757406 4762 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.765893 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-qgc88"] Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.771869 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7jzc\" (UniqueName: \"kubernetes.io/projected/7a1f5442-2f22-4dff-b59a-0a8233a83b41-kube-api-access-c7jzc\") pod \"frr-k8s-webhook-server-7f989f654f-xrnnz\" (UID: \"7a1f5442-2f22-4dff-b59a-0a8233a83b41\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.836749 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-metrics-certs\") pod \"speaker-4j4bt\" (UID: \"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9\") " pod="metallb-system/speaker-4j4bt" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.836806 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-frr-startup\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.836822 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-metallb-excludel2\") pod \"speaker-4j4bt\" (UID: \"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9\") " pod="metallb-system/speaker-4j4bt" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.836839 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp964\" (UniqueName: \"kubernetes.io/projected/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-kube-api-access-pp964\") pod \"speaker-4j4bt\" (UID: \"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9\") " pod="metallb-system/speaker-4j4bt" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.836874 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-frr-conf\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.836898 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-metrics\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.836922 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6rss\" (UniqueName: \"kubernetes.io/projected/0b0d938e-fbb6-4ed9-8822-c87f8ce564e3-kube-api-access-d6rss\") pod \"controller-86ddb6bd46-qgc88\" (UID: \"0b0d938e-fbb6-4ed9-8822-c87f8ce564e3\") " pod="metallb-system/controller-86ddb6bd46-qgc88" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.836961 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-memberlist\") pod \"speaker-4j4bt\" (UID: \"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9\") " pod="metallb-system/speaker-4j4bt" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.836989 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/0b0d938e-fbb6-4ed9-8822-c87f8ce564e3-cert\") pod \"controller-86ddb6bd46-qgc88\" (UID: \"0b0d938e-fbb6-4ed9-8822-c87f8ce564e3\") " pod="metallb-system/controller-86ddb6bd46-qgc88" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.837014 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b0d938e-fbb6-4ed9-8822-c87f8ce564e3-metrics-certs\") pod \"controller-86ddb6bd46-qgc88\" (UID: \"0b0d938e-fbb6-4ed9-8822-c87f8ce564e3\") " pod="metallb-system/controller-86ddb6bd46-qgc88" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.837030 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-metrics-certs\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.837052 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-frr-sockets\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.837085 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7ww2\" (UniqueName: \"kubernetes.io/projected/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-kube-api-access-s7ww2\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.837103 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-reloader\") pod \"frr-k8s-4qgst\" (UID: 
\"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.837400 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-reloader\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: E0308 00:40:14.837579 4762 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.837675 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-metallb-excludel2\") pod \"speaker-4j4bt\" (UID: \"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9\") " pod="metallb-system/speaker-4j4bt" Mar 08 00:40:14 crc kubenswrapper[4762]: E0308 00:40:14.837829 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-metrics-certs podName:3cafb56e-d1ea-48b5-9b1c-691e86cba0d9 nodeName:}" failed. No retries permitted until 2026-03-08 00:40:15.337688313 +0000 UTC m=+1036.811832657 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-metrics-certs") pod "speaker-4j4bt" (UID: "3cafb56e-d1ea-48b5-9b1c-691e86cba0d9") : secret "speaker-certs-secret" not found Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.837912 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-frr-conf\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: E0308 00:40:14.838090 4762 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 08 00:40:14 crc kubenswrapper[4762]: E0308 00:40:14.838175 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-memberlist podName:3cafb56e-d1ea-48b5-9b1c-691e86cba0d9 nodeName:}" failed. No retries permitted until 2026-03-08 00:40:15.338153097 +0000 UTC m=+1036.812297431 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-memberlist") pod "speaker-4j4bt" (UID: "3cafb56e-d1ea-48b5-9b1c-691e86cba0d9") : secret "metallb-memberlist" not found Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.838183 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-frr-startup\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.840435 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-metrics\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.841990 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-frr-sockets\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.842455 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-metrics-certs\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.853226 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp964\" (UniqueName: \"kubernetes.io/projected/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-kube-api-access-pp964\") pod \"speaker-4j4bt\" (UID: \"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9\") " 
pod="metallb-system/speaker-4j4bt" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.858136 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7ww2\" (UniqueName: \"kubernetes.io/projected/35f236f0-d58d-4bb2-a6cd-689097c3fbf4-kube-api-access-s7ww2\") pod \"frr-k8s-4qgst\" (UID: \"35f236f0-d58d-4bb2-a6cd-689097c3fbf4\") " pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.934438 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.938305 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6rss\" (UniqueName: \"kubernetes.io/projected/0b0d938e-fbb6-4ed9-8822-c87f8ce564e3-kube-api-access-d6rss\") pod \"controller-86ddb6bd46-qgc88\" (UID: \"0b0d938e-fbb6-4ed9-8822-c87f8ce564e3\") " pod="metallb-system/controller-86ddb6bd46-qgc88" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.938514 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b0d938e-fbb6-4ed9-8822-c87f8ce564e3-cert\") pod \"controller-86ddb6bd46-qgc88\" (UID: \"0b0d938e-fbb6-4ed9-8822-c87f8ce564e3\") " pod="metallb-system/controller-86ddb6bd46-qgc88" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.938648 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b0d938e-fbb6-4ed9-8822-c87f8ce564e3-metrics-certs\") pod \"controller-86ddb6bd46-qgc88\" (UID: \"0b0d938e-fbb6-4ed9-8822-c87f8ce564e3\") " pod="metallb-system/controller-86ddb6bd46-qgc88" Mar 08 00:40:14 crc kubenswrapper[4762]: E0308 00:40:14.938707 4762 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Mar 08 00:40:14 crc kubenswrapper[4762]: E0308 00:40:14.938924 4762 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b0d938e-fbb6-4ed9-8822-c87f8ce564e3-metrics-certs podName:0b0d938e-fbb6-4ed9-8822-c87f8ce564e3 nodeName:}" failed. No retries permitted until 2026-03-08 00:40:15.438905523 +0000 UTC m=+1036.913049867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0b0d938e-fbb6-4ed9-8822-c87f8ce564e3-metrics-certs") pod "controller-86ddb6bd46-qgc88" (UID: "0b0d938e-fbb6-4ed9-8822-c87f8ce564e3") : secret "controller-certs-secret" not found Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.944130 4762 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.956735 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0b0d938e-fbb6-4ed9-8822-c87f8ce564e3-cert\") pod \"controller-86ddb6bd46-qgc88\" (UID: \"0b0d938e-fbb6-4ed9-8822-c87f8ce564e3\") " pod="metallb-system/controller-86ddb6bd46-qgc88" Mar 08 00:40:14 crc kubenswrapper[4762]: I0308 00:40:14.960347 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6rss\" (UniqueName: \"kubernetes.io/projected/0b0d938e-fbb6-4ed9-8822-c87f8ce564e3-kube-api-access-d6rss\") pod \"controller-86ddb6bd46-qgc88\" (UID: \"0b0d938e-fbb6-4ed9-8822-c87f8ce564e3\") " pod="metallb-system/controller-86ddb6bd46-qgc88" Mar 08 00:40:15 crc kubenswrapper[4762]: I0308 00:40:15.242211 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a1f5442-2f22-4dff-b59a-0a8233a83b41-cert\") pod \"frr-k8s-webhook-server-7f989f654f-xrnnz\" (UID: \"7a1f5442-2f22-4dff-b59a-0a8233a83b41\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" Mar 08 00:40:15 crc kubenswrapper[4762]: I0308 00:40:15.250105 4762 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7a1f5442-2f22-4dff-b59a-0a8233a83b41-cert\") pod \"frr-k8s-webhook-server-7f989f654f-xrnnz\" (UID: \"7a1f5442-2f22-4dff-b59a-0a8233a83b41\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" Mar 08 00:40:15 crc kubenswrapper[4762]: I0308 00:40:15.343789 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-memberlist\") pod \"speaker-4j4bt\" (UID: \"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9\") " pod="metallb-system/speaker-4j4bt" Mar 08 00:40:15 crc kubenswrapper[4762]: E0308 00:40:15.343975 4762 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 08 00:40:15 crc kubenswrapper[4762]: E0308 00:40:15.344335 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-memberlist podName:3cafb56e-d1ea-48b5-9b1c-691e86cba0d9 nodeName:}" failed. No retries permitted until 2026-03-08 00:40:16.344313258 +0000 UTC m=+1037.818457612 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-memberlist") pod "speaker-4j4bt" (UID: "3cafb56e-d1ea-48b5-9b1c-691e86cba0d9") : secret "metallb-memberlist" not found Mar 08 00:40:15 crc kubenswrapper[4762]: I0308 00:40:15.344272 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-metrics-certs\") pod \"speaker-4j4bt\" (UID: \"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9\") " pod="metallb-system/speaker-4j4bt" Mar 08 00:40:15 crc kubenswrapper[4762]: I0308 00:40:15.350084 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-metrics-certs\") pod \"speaker-4j4bt\" (UID: \"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9\") " pod="metallb-system/speaker-4j4bt" Mar 08 00:40:15 crc kubenswrapper[4762]: I0308 00:40:15.446405 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b0d938e-fbb6-4ed9-8822-c87f8ce564e3-metrics-certs\") pod \"controller-86ddb6bd46-qgc88\" (UID: \"0b0d938e-fbb6-4ed9-8822-c87f8ce564e3\") " pod="metallb-system/controller-86ddb6bd46-qgc88" Mar 08 00:40:15 crc kubenswrapper[4762]: I0308 00:40:15.450216 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0b0d938e-fbb6-4ed9-8822-c87f8ce564e3-metrics-certs\") pod \"controller-86ddb6bd46-qgc88\" (UID: \"0b0d938e-fbb6-4ed9-8822-c87f8ce564e3\") " pod="metallb-system/controller-86ddb6bd46-qgc88" Mar 08 00:40:15 crc kubenswrapper[4762]: I0308 00:40:15.522658 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" Mar 08 00:40:15 crc kubenswrapper[4762]: I0308 00:40:15.699499 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-qgc88" Mar 08 00:40:16 crc kubenswrapper[4762]: I0308 00:40:16.011876 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz"] Mar 08 00:40:16 crc kubenswrapper[4762]: W0308 00:40:16.018741 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a1f5442_2f22_4dff_b59a_0a8233a83b41.slice/crio-67d18a955c7453f4c40d769e8302ab1bc3576b03e907c766c50f00818dbf1ebf WatchSource:0}: Error finding container 67d18a955c7453f4c40d769e8302ab1bc3576b03e907c766c50f00818dbf1ebf: Status 404 returned error can't find the container with id 67d18a955c7453f4c40d769e8302ab1bc3576b03e907c766c50f00818dbf1ebf Mar 08 00:40:16 crc kubenswrapper[4762]: I0308 00:40:16.069249 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" event={"ID":"7a1f5442-2f22-4dff-b59a-0a8233a83b41","Type":"ContainerStarted","Data":"67d18a955c7453f4c40d769e8302ab1bc3576b03e907c766c50f00818dbf1ebf"} Mar 08 00:40:16 crc kubenswrapper[4762]: I0308 00:40:16.070496 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4qgst" event={"ID":"35f236f0-d58d-4bb2-a6cd-689097c3fbf4","Type":"ContainerStarted","Data":"f7b3231a8010e02674b6faa121b68f90736592569152e95072bb0ecf53d58ab0"} Mar 08 00:40:16 crc kubenswrapper[4762]: I0308 00:40:16.175039 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-qgc88"] Mar 08 00:40:16 crc kubenswrapper[4762]: W0308 00:40:16.179403 4762 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b0d938e_fbb6_4ed9_8822_c87f8ce564e3.slice/crio-b348873a8ae65bc46771a06194abe0825dcfa628b6d5ea934e34dcf23c0d0ba6 WatchSource:0}: Error finding container b348873a8ae65bc46771a06194abe0825dcfa628b6d5ea934e34dcf23c0d0ba6: Status 404 returned error can't find the container with id b348873a8ae65bc46771a06194abe0825dcfa628b6d5ea934e34dcf23c0d0ba6 Mar 08 00:40:16 crc kubenswrapper[4762]: I0308 00:40:16.367886 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-memberlist\") pod \"speaker-4j4bt\" (UID: \"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9\") " pod="metallb-system/speaker-4j4bt" Mar 08 00:40:16 crc kubenswrapper[4762]: E0308 00:40:16.368118 4762 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 08 00:40:16 crc kubenswrapper[4762]: E0308 00:40:16.368428 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-memberlist podName:3cafb56e-d1ea-48b5-9b1c-691e86cba0d9 nodeName:}" failed. No retries permitted until 2026-03-08 00:40:18.368399004 +0000 UTC m=+1039.842543358 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-memberlist") pod "speaker-4j4bt" (UID: "3cafb56e-d1ea-48b5-9b1c-691e86cba0d9") : secret "metallb-memberlist" not found Mar 08 00:40:17 crc kubenswrapper[4762]: I0308 00:40:17.087984 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-qgc88" event={"ID":"0b0d938e-fbb6-4ed9-8822-c87f8ce564e3","Type":"ContainerStarted","Data":"1c96cce6c34c1fbd042eaca84335778a6dcf0249f5bbdf59253bd5900e53494b"} Mar 08 00:40:17 crc kubenswrapper[4762]: I0308 00:40:17.088060 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-qgc88" event={"ID":"0b0d938e-fbb6-4ed9-8822-c87f8ce564e3","Type":"ContainerStarted","Data":"30d68b90d01d1a17750d3c6481626807e06244c36709e6ccee8e35818d28e814"} Mar 08 00:40:17 crc kubenswrapper[4762]: I0308 00:40:17.088076 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-qgc88" event={"ID":"0b0d938e-fbb6-4ed9-8822-c87f8ce564e3","Type":"ContainerStarted","Data":"b348873a8ae65bc46771a06194abe0825dcfa628b6d5ea934e34dcf23c0d0ba6"} Mar 08 00:40:17 crc kubenswrapper[4762]: I0308 00:40:17.088406 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-qgc88" Mar 08 00:40:17 crc kubenswrapper[4762]: I0308 00:40:17.139706 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-qgc88" podStartSLOduration=3.139690311 podStartE2EDuration="3.139690311s" podCreationTimestamp="2026-03-08 00:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:40:17.136543597 +0000 UTC m=+1038.610687941" watchObservedRunningTime="2026-03-08 00:40:17.139690311 +0000 UTC m=+1038.613834655" Mar 08 00:40:18 crc kubenswrapper[4762]: 
I0308 00:40:18.400069 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-memberlist\") pod \"speaker-4j4bt\" (UID: \"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9\") " pod="metallb-system/speaker-4j4bt" Mar 08 00:40:18 crc kubenswrapper[4762]: I0308 00:40:18.422718 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/3cafb56e-d1ea-48b5-9b1c-691e86cba0d9-memberlist\") pod \"speaker-4j4bt\" (UID: \"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9\") " pod="metallb-system/speaker-4j4bt" Mar 08 00:40:18 crc kubenswrapper[4762]: I0308 00:40:18.646854 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-4j4bt" Mar 08 00:40:18 crc kubenswrapper[4762]: W0308 00:40:18.718688 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cafb56e_d1ea_48b5_9b1c_691e86cba0d9.slice/crio-b1b9c6f76cc7ece4c2668c2c1e807053ded014c23faa4fbb6bbbf56fa3346b5c WatchSource:0}: Error finding container b1b9c6f76cc7ece4c2668c2c1e807053ded014c23faa4fbb6bbbf56fa3346b5c: Status 404 returned error can't find the container with id b1b9c6f76cc7ece4c2668c2c1e807053ded014c23faa4fbb6bbbf56fa3346b5c Mar 08 00:40:19 crc kubenswrapper[4762]: I0308 00:40:19.117416 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4j4bt" event={"ID":"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9","Type":"ContainerStarted","Data":"cb5db0e89275aa1ffd58ba8eff7debeec9669527780fa26451e57521ada330fc"} Mar 08 00:40:19 crc kubenswrapper[4762]: I0308 00:40:19.117743 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4j4bt" event={"ID":"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9","Type":"ContainerStarted","Data":"b1b9c6f76cc7ece4c2668c2c1e807053ded014c23faa4fbb6bbbf56fa3346b5c"} Mar 08 00:40:20 crc 
kubenswrapper[4762]: I0308 00:40:20.132089 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4j4bt" event={"ID":"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9","Type":"ContainerStarted","Data":"3870ccddd5120bd0083cf198930427459d43d17dd7978d77a56d6a4b267b5254"} Mar 08 00:40:20 crc kubenswrapper[4762]: I0308 00:40:20.132664 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4j4bt" Mar 08 00:40:20 crc kubenswrapper[4762]: I0308 00:40:20.165267 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-4j4bt" podStartSLOduration=6.165242898 podStartE2EDuration="6.165242898s" podCreationTimestamp="2026-03-08 00:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:40:20.149090235 +0000 UTC m=+1041.623234569" watchObservedRunningTime="2026-03-08 00:40:20.165242898 +0000 UTC m=+1041.639387232" Mar 08 00:40:23 crc kubenswrapper[4762]: I0308 00:40:23.181991 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" event={"ID":"7a1f5442-2f22-4dff-b59a-0a8233a83b41","Type":"ContainerStarted","Data":"0d392033a642a70fd59abb326568a8384adda73208a797664556c635ffc1ff46"} Mar 08 00:40:23 crc kubenswrapper[4762]: I0308 00:40:23.182354 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" Mar 08 00:40:23 crc kubenswrapper[4762]: I0308 00:40:23.201302 4762 generic.go:334] "Generic (PLEG): container finished" podID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerID="42fde74de83f741755d6a2e611c7aeb7c15f1fae73d0fe50a74e7c7eb3707a77" exitCode=0 Mar 08 00:40:23 crc kubenswrapper[4762]: I0308 00:40:23.201358 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4qgst" 
event={"ID":"35f236f0-d58d-4bb2-a6cd-689097c3fbf4","Type":"ContainerDied","Data":"42fde74de83f741755d6a2e611c7aeb7c15f1fae73d0fe50a74e7c7eb3707a77"} Mar 08 00:40:23 crc kubenswrapper[4762]: I0308 00:40:23.214747 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" podStartSLOduration=2.383401472 podStartE2EDuration="9.214723132s" podCreationTimestamp="2026-03-08 00:40:14 +0000 UTC" firstStartedPulling="2026-03-08 00:40:16.023581332 +0000 UTC m=+1037.497725676" lastFinishedPulling="2026-03-08 00:40:22.854902992 +0000 UTC m=+1044.329047336" observedRunningTime="2026-03-08 00:40:23.208120014 +0000 UTC m=+1044.682264358" watchObservedRunningTime="2026-03-08 00:40:23.214723132 +0000 UTC m=+1044.688867476" Mar 08 00:40:24 crc kubenswrapper[4762]: I0308 00:40:24.210234 4762 generic.go:334] "Generic (PLEG): container finished" podID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerID="b03c7687136d61baac297119c28ebbbec902d6b9b2ada3261c1de8b5fc690c2d" exitCode=0 Mar 08 00:40:24 crc kubenswrapper[4762]: I0308 00:40:24.210323 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4qgst" event={"ID":"35f236f0-d58d-4bb2-a6cd-689097c3fbf4","Type":"ContainerDied","Data":"b03c7687136d61baac297119c28ebbbec902d6b9b2ada3261c1de8b5fc690c2d"} Mar 08 00:40:25 crc kubenswrapper[4762]: I0308 00:40:25.226280 4762 generic.go:334] "Generic (PLEG): container finished" podID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerID="85343b5c5e6400b31a7391e835bb0c53252caf6c86302258a6bc0ddbfc055018" exitCode=0 Mar 08 00:40:25 crc kubenswrapper[4762]: I0308 00:40:25.226341 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4qgst" event={"ID":"35f236f0-d58d-4bb2-a6cd-689097c3fbf4","Type":"ContainerDied","Data":"85343b5c5e6400b31a7391e835bb0c53252caf6c86302258a6bc0ddbfc055018"} Mar 08 00:40:26 crc kubenswrapper[4762]: I0308 00:40:26.247668 4762 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4qgst" event={"ID":"35f236f0-d58d-4bb2-a6cd-689097c3fbf4","Type":"ContainerStarted","Data":"93beb1c860468748f7ebc12e3c05047c6457e4b16e140267aecb4c7e48531952"} Mar 08 00:40:26 crc kubenswrapper[4762]: I0308 00:40:26.248077 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4qgst" event={"ID":"35f236f0-d58d-4bb2-a6cd-689097c3fbf4","Type":"ContainerStarted","Data":"9a8479f0ae20870037d1540b8571dd7acb433baf50676d7c335444bbc5fd39c8"} Mar 08 00:40:26 crc kubenswrapper[4762]: I0308 00:40:26.248091 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4qgst" event={"ID":"35f236f0-d58d-4bb2-a6cd-689097c3fbf4","Type":"ContainerStarted","Data":"6f2686eb62abadc020f06b33173810848905c7100e4362b7fc8b6e2ff1d35a2e"} Mar 08 00:40:26 crc kubenswrapper[4762]: I0308 00:40:26.248102 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4qgst" event={"ID":"35f236f0-d58d-4bb2-a6cd-689097c3fbf4","Type":"ContainerStarted","Data":"01233fc768238da4d221288098b6de0ccdbb6b9b8f604c4f036df7b0d4542736"} Mar 08 00:40:26 crc kubenswrapper[4762]: I0308 00:40:26.248116 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4qgst" event={"ID":"35f236f0-d58d-4bb2-a6cd-689097c3fbf4","Type":"ContainerStarted","Data":"8078bc31cae92050783ac2dd468d11b53c5b3670e54aad31fe27ca96d77a0828"} Mar 08 00:40:27 crc kubenswrapper[4762]: I0308 00:40:27.260719 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4qgst" event={"ID":"35f236f0-d58d-4bb2-a6cd-689097c3fbf4","Type":"ContainerStarted","Data":"1e0dbd10baa44e844352bfbfc42648cf0f275a9d1100969acd28e92d1e5898dc"} Mar 08 00:40:27 crc kubenswrapper[4762]: I0308 00:40:27.262695 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:29 crc kubenswrapper[4762]: I0308 00:40:29.935080 4762 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:29 crc kubenswrapper[4762]: I0308 00:40:29.997507 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:30 crc kubenswrapper[4762]: I0308 00:40:30.035746 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-4qgst" podStartSLOduration=8.356569925 podStartE2EDuration="16.035727264s" podCreationTimestamp="2026-03-08 00:40:14 +0000 UTC" firstStartedPulling="2026-03-08 00:40:15.155414164 +0000 UTC m=+1036.629558518" lastFinishedPulling="2026-03-08 00:40:22.834571513 +0000 UTC m=+1044.308715857" observedRunningTime="2026-03-08 00:40:27.303197258 +0000 UTC m=+1048.777341642" watchObservedRunningTime="2026-03-08 00:40:30.035727264 +0000 UTC m=+1051.509871618" Mar 08 00:40:35 crc kubenswrapper[4762]: I0308 00:40:35.529796 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" Mar 08 00:40:35 crc kubenswrapper[4762]: I0308 00:40:35.708129 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-qgc88" Mar 08 00:40:38 crc kubenswrapper[4762]: I0308 00:40:38.651895 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4j4bt" Mar 08 00:40:41 crc kubenswrapper[4762]: I0308 00:40:41.695478 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-27m2d"] Mar 08 00:40:41 crc kubenswrapper[4762]: I0308 00:40:41.698291 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-27m2d" Mar 08 00:40:41 crc kubenswrapper[4762]: I0308 00:40:41.701490 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-s2k65" Mar 08 00:40:41 crc kubenswrapper[4762]: I0308 00:40:41.702279 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 08 00:40:41 crc kubenswrapper[4762]: I0308 00:40:41.706011 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-27m2d"] Mar 08 00:40:41 crc kubenswrapper[4762]: I0308 00:40:41.716829 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 08 00:40:41 crc kubenswrapper[4762]: I0308 00:40:41.757371 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpjbk\" (UniqueName: \"kubernetes.io/projected/99ec0edb-c1af-44c1-98b1-a559f67de815-kube-api-access-gpjbk\") pod \"openstack-operator-index-27m2d\" (UID: \"99ec0edb-c1af-44c1-98b1-a559f67de815\") " pod="openstack-operators/openstack-operator-index-27m2d" Mar 08 00:40:41 crc kubenswrapper[4762]: I0308 00:40:41.859164 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpjbk\" (UniqueName: \"kubernetes.io/projected/99ec0edb-c1af-44c1-98b1-a559f67de815-kube-api-access-gpjbk\") pod \"openstack-operator-index-27m2d\" (UID: \"99ec0edb-c1af-44c1-98b1-a559f67de815\") " pod="openstack-operators/openstack-operator-index-27m2d" Mar 08 00:40:41 crc kubenswrapper[4762]: I0308 00:40:41.887273 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpjbk\" (UniqueName: \"kubernetes.io/projected/99ec0edb-c1af-44c1-98b1-a559f67de815-kube-api-access-gpjbk\") pod \"openstack-operator-index-27m2d\" (UID: 
\"99ec0edb-c1af-44c1-98b1-a559f67de815\") " pod="openstack-operators/openstack-operator-index-27m2d" Mar 08 00:40:42 crc kubenswrapper[4762]: I0308 00:40:42.017078 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-27m2d" Mar 08 00:40:42 crc kubenswrapper[4762]: I0308 00:40:42.551119 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-27m2d"] Mar 08 00:40:43 crc kubenswrapper[4762]: I0308 00:40:43.406901 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-27m2d" event={"ID":"99ec0edb-c1af-44c1-98b1-a559f67de815","Type":"ContainerStarted","Data":"b82d3e7cff505ccfa3dda204de7c4b26c9ce504d4f5a5bf4f5937a61fdd4199b"} Mar 08 00:40:44 crc kubenswrapper[4762]: I0308 00:40:44.940217 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4qgst" Mar 08 00:40:45 crc kubenswrapper[4762]: I0308 00:40:45.848002 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-27m2d"] Mar 08 00:40:46 crc kubenswrapper[4762]: I0308 00:40:46.437684 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-27m2d" event={"ID":"99ec0edb-c1af-44c1-98b1-a559f67de815","Type":"ContainerStarted","Data":"b0deaf68540ec6d8add72ec5a05c810105bfe973e94364c2d690fc34e524545d"} Mar 08 00:40:46 crc kubenswrapper[4762]: I0308 00:40:46.457923 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xtl98"] Mar 08 00:40:46 crc kubenswrapper[4762]: I0308 00:40:46.459754 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xtl98" Mar 08 00:40:46 crc kubenswrapper[4762]: I0308 00:40:46.467707 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-27m2d" podStartSLOduration=2.695700203 podStartE2EDuration="5.467671971s" podCreationTimestamp="2026-03-08 00:40:41 +0000 UTC" firstStartedPulling="2026-03-08 00:40:42.560897694 +0000 UTC m=+1064.035042038" lastFinishedPulling="2026-03-08 00:40:45.332869462 +0000 UTC m=+1066.807013806" observedRunningTime="2026-03-08 00:40:46.461582419 +0000 UTC m=+1067.935726793" watchObservedRunningTime="2026-03-08 00:40:46.467671971 +0000 UTC m=+1067.941816355" Mar 08 00:40:46 crc kubenswrapper[4762]: I0308 00:40:46.492986 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xtl98"] Mar 08 00:40:46 crc kubenswrapper[4762]: I0308 00:40:46.558599 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmsnm\" (UniqueName: \"kubernetes.io/projected/0707d234-c53e-4212-b289-65a10c0b1502-kube-api-access-gmsnm\") pod \"openstack-operator-index-xtl98\" (UID: \"0707d234-c53e-4212-b289-65a10c0b1502\") " pod="openstack-operators/openstack-operator-index-xtl98" Mar 08 00:40:46 crc kubenswrapper[4762]: I0308 00:40:46.659624 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmsnm\" (UniqueName: \"kubernetes.io/projected/0707d234-c53e-4212-b289-65a10c0b1502-kube-api-access-gmsnm\") pod \"openstack-operator-index-xtl98\" (UID: \"0707d234-c53e-4212-b289-65a10c0b1502\") " pod="openstack-operators/openstack-operator-index-xtl98" Mar 08 00:40:46 crc kubenswrapper[4762]: I0308 00:40:46.680710 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmsnm\" (UniqueName: 
\"kubernetes.io/projected/0707d234-c53e-4212-b289-65a10c0b1502-kube-api-access-gmsnm\") pod \"openstack-operator-index-xtl98\" (UID: \"0707d234-c53e-4212-b289-65a10c0b1502\") " pod="openstack-operators/openstack-operator-index-xtl98" Mar 08 00:40:46 crc kubenswrapper[4762]: I0308 00:40:46.795255 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xtl98" Mar 08 00:40:47 crc kubenswrapper[4762]: I0308 00:40:47.301235 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xtl98"] Mar 08 00:40:47 crc kubenswrapper[4762]: W0308 00:40:47.315014 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0707d234_c53e_4212_b289_65a10c0b1502.slice/crio-039e5277fcd1fa6bbb664d93544780febfce41b653a045485bc9e9bd15189ebc WatchSource:0}: Error finding container 039e5277fcd1fa6bbb664d93544780febfce41b653a045485bc9e9bd15189ebc: Status 404 returned error can't find the container with id 039e5277fcd1fa6bbb664d93544780febfce41b653a045485bc9e9bd15189ebc Mar 08 00:40:47 crc kubenswrapper[4762]: I0308 00:40:47.464045 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-27m2d" podUID="99ec0edb-c1af-44c1-98b1-a559f67de815" containerName="registry-server" containerID="cri-o://b0deaf68540ec6d8add72ec5a05c810105bfe973e94364c2d690fc34e524545d" gracePeriod=2 Mar 08 00:40:47 crc kubenswrapper[4762]: I0308 00:40:47.464369 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xtl98" event={"ID":"0707d234-c53e-4212-b289-65a10c0b1502","Type":"ContainerStarted","Data":"039e5277fcd1fa6bbb664d93544780febfce41b653a045485bc9e9bd15189ebc"} Mar 08 00:40:48 crc kubenswrapper[4762]: I0308 00:40:48.001279 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-27m2d" Mar 08 00:40:48 crc kubenswrapper[4762]: I0308 00:40:48.084410 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpjbk\" (UniqueName: \"kubernetes.io/projected/99ec0edb-c1af-44c1-98b1-a559f67de815-kube-api-access-gpjbk\") pod \"99ec0edb-c1af-44c1-98b1-a559f67de815\" (UID: \"99ec0edb-c1af-44c1-98b1-a559f67de815\") " Mar 08 00:40:48 crc kubenswrapper[4762]: I0308 00:40:48.092598 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ec0edb-c1af-44c1-98b1-a559f67de815-kube-api-access-gpjbk" (OuterVolumeSpecName: "kube-api-access-gpjbk") pod "99ec0edb-c1af-44c1-98b1-a559f67de815" (UID: "99ec0edb-c1af-44c1-98b1-a559f67de815"). InnerVolumeSpecName "kube-api-access-gpjbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:40:48 crc kubenswrapper[4762]: I0308 00:40:48.186284 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpjbk\" (UniqueName: \"kubernetes.io/projected/99ec0edb-c1af-44c1-98b1-a559f67de815-kube-api-access-gpjbk\") on node \"crc\" DevicePath \"\"" Mar 08 00:40:48 crc kubenswrapper[4762]: I0308 00:40:48.476795 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-27m2d" Mar 08 00:40:48 crc kubenswrapper[4762]: I0308 00:40:48.476698 4762 generic.go:334] "Generic (PLEG): container finished" podID="99ec0edb-c1af-44c1-98b1-a559f67de815" containerID="b0deaf68540ec6d8add72ec5a05c810105bfe973e94364c2d690fc34e524545d" exitCode=0 Mar 08 00:40:48 crc kubenswrapper[4762]: I0308 00:40:48.476788 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-27m2d" event={"ID":"99ec0edb-c1af-44c1-98b1-a559f67de815","Type":"ContainerDied","Data":"b0deaf68540ec6d8add72ec5a05c810105bfe973e94364c2d690fc34e524545d"} Mar 08 00:40:48 crc kubenswrapper[4762]: I0308 00:40:48.476993 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-27m2d" event={"ID":"99ec0edb-c1af-44c1-98b1-a559f67de815","Type":"ContainerDied","Data":"b82d3e7cff505ccfa3dda204de7c4b26c9ce504d4f5a5bf4f5937a61fdd4199b"} Mar 08 00:40:48 crc kubenswrapper[4762]: I0308 00:40:48.477035 4762 scope.go:117] "RemoveContainer" containerID="b0deaf68540ec6d8add72ec5a05c810105bfe973e94364c2d690fc34e524545d" Mar 08 00:40:48 crc kubenswrapper[4762]: I0308 00:40:48.481445 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xtl98" event={"ID":"0707d234-c53e-4212-b289-65a10c0b1502","Type":"ContainerStarted","Data":"6f49524d3d81fb6b57e665e0100ba29602d39ed8c8839b5fef921864ea3f8c2e"} Mar 08 00:40:48 crc kubenswrapper[4762]: I0308 00:40:48.504453 4762 scope.go:117] "RemoveContainer" containerID="b0deaf68540ec6d8add72ec5a05c810105bfe973e94364c2d690fc34e524545d" Mar 08 00:40:48 crc kubenswrapper[4762]: E0308 00:40:48.505435 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0deaf68540ec6d8add72ec5a05c810105bfe973e94364c2d690fc34e524545d\": container with ID starting with 
b0deaf68540ec6d8add72ec5a05c810105bfe973e94364c2d690fc34e524545d not found: ID does not exist" containerID="b0deaf68540ec6d8add72ec5a05c810105bfe973e94364c2d690fc34e524545d" Mar 08 00:40:48 crc kubenswrapper[4762]: I0308 00:40:48.505488 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0deaf68540ec6d8add72ec5a05c810105bfe973e94364c2d690fc34e524545d"} err="failed to get container status \"b0deaf68540ec6d8add72ec5a05c810105bfe973e94364c2d690fc34e524545d\": rpc error: code = NotFound desc = could not find container \"b0deaf68540ec6d8add72ec5a05c810105bfe973e94364c2d690fc34e524545d\": container with ID starting with b0deaf68540ec6d8add72ec5a05c810105bfe973e94364c2d690fc34e524545d not found: ID does not exist" Mar 08 00:40:48 crc kubenswrapper[4762]: I0308 00:40:48.513533 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xtl98" podStartSLOduration=2.437971909 podStartE2EDuration="2.513514822s" podCreationTimestamp="2026-03-08 00:40:46 +0000 UTC" firstStartedPulling="2026-03-08 00:40:47.329038625 +0000 UTC m=+1068.803182979" lastFinishedPulling="2026-03-08 00:40:47.404581538 +0000 UTC m=+1068.878725892" observedRunningTime="2026-03-08 00:40:48.511922154 +0000 UTC m=+1069.986066518" watchObservedRunningTime="2026-03-08 00:40:48.513514822 +0000 UTC m=+1069.987659186" Mar 08 00:40:48 crc kubenswrapper[4762]: I0308 00:40:48.541568 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-27m2d"] Mar 08 00:40:48 crc kubenswrapper[4762]: I0308 00:40:48.546896 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-27m2d"] Mar 08 00:40:49 crc kubenswrapper[4762]: I0308 00:40:49.275095 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99ec0edb-c1af-44c1-98b1-a559f67de815" 
path="/var/lib/kubelet/pods/99ec0edb-c1af-44c1-98b1-a559f67de815/volumes" Mar 08 00:40:56 crc kubenswrapper[4762]: I0308 00:40:56.796176 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xtl98" Mar 08 00:40:56 crc kubenswrapper[4762]: I0308 00:40:56.796731 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xtl98" Mar 08 00:40:56 crc kubenswrapper[4762]: I0308 00:40:56.833806 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-xtl98" Mar 08 00:40:57 crc kubenswrapper[4762]: I0308 00:40:57.608943 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-xtl98" Mar 08 00:41:00 crc kubenswrapper[4762]: I0308 00:41:00.357886 4762 scope.go:117] "RemoveContainer" containerID="d3dd37e3275d998a5e68707a602842d29eebb5b577259e3d232f3dfc129f68b8" Mar 08 00:41:03 crc kubenswrapper[4762]: I0308 00:41:03.377491 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm"] Mar 08 00:41:03 crc kubenswrapper[4762]: E0308 00:41:03.378416 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ec0edb-c1af-44c1-98b1-a559f67de815" containerName="registry-server" Mar 08 00:41:03 crc kubenswrapper[4762]: I0308 00:41:03.378438 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ec0edb-c1af-44c1-98b1-a559f67de815" containerName="registry-server" Mar 08 00:41:03 crc kubenswrapper[4762]: I0308 00:41:03.378667 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ec0edb-c1af-44c1-98b1-a559f67de815" containerName="registry-server" Mar 08 00:41:03 crc kubenswrapper[4762]: I0308 00:41:03.380349 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" Mar 08 00:41:03 crc kubenswrapper[4762]: I0308 00:41:03.383893 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-fmd6c" Mar 08 00:41:03 crc kubenswrapper[4762]: I0308 00:41:03.405644 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm"] Mar 08 00:41:03 crc kubenswrapper[4762]: I0308 00:41:03.436505 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31b564be-f28f-428b-a65e-fd3521c7b9f3-util\") pod \"3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm\" (UID: \"31b564be-f28f-428b-a65e-fd3521c7b9f3\") " pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" Mar 08 00:41:03 crc kubenswrapper[4762]: I0308 00:41:03.437104 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rbbd\" (UniqueName: \"kubernetes.io/projected/31b564be-f28f-428b-a65e-fd3521c7b9f3-kube-api-access-5rbbd\") pod \"3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm\" (UID: \"31b564be-f28f-428b-a65e-fd3521c7b9f3\") " pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" Mar 08 00:41:03 crc kubenswrapper[4762]: I0308 00:41:03.437150 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31b564be-f28f-428b-a65e-fd3521c7b9f3-bundle\") pod \"3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm\" (UID: \"31b564be-f28f-428b-a65e-fd3521c7b9f3\") " pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" Mar 08 00:41:03 crc kubenswrapper[4762]: I0308 
00:41:03.539003 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rbbd\" (UniqueName: \"kubernetes.io/projected/31b564be-f28f-428b-a65e-fd3521c7b9f3-kube-api-access-5rbbd\") pod \"3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm\" (UID: \"31b564be-f28f-428b-a65e-fd3521c7b9f3\") " pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" Mar 08 00:41:03 crc kubenswrapper[4762]: I0308 00:41:03.539056 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31b564be-f28f-428b-a65e-fd3521c7b9f3-bundle\") pod \"3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm\" (UID: \"31b564be-f28f-428b-a65e-fd3521c7b9f3\") " pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" Mar 08 00:41:03 crc kubenswrapper[4762]: I0308 00:41:03.539113 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31b564be-f28f-428b-a65e-fd3521c7b9f3-util\") pod \"3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm\" (UID: \"31b564be-f28f-428b-a65e-fd3521c7b9f3\") " pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" Mar 08 00:41:03 crc kubenswrapper[4762]: I0308 00:41:03.539753 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31b564be-f28f-428b-a65e-fd3521c7b9f3-util\") pod \"3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm\" (UID: \"31b564be-f28f-428b-a65e-fd3521c7b9f3\") " pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" Mar 08 00:41:03 crc kubenswrapper[4762]: I0308 00:41:03.540065 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/31b564be-f28f-428b-a65e-fd3521c7b9f3-bundle\") pod \"3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm\" (UID: \"31b564be-f28f-428b-a65e-fd3521c7b9f3\") " pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" Mar 08 00:41:03 crc kubenswrapper[4762]: I0308 00:41:03.561753 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rbbd\" (UniqueName: \"kubernetes.io/projected/31b564be-f28f-428b-a65e-fd3521c7b9f3-kube-api-access-5rbbd\") pod \"3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm\" (UID: \"31b564be-f28f-428b-a65e-fd3521c7b9f3\") " pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" Mar 08 00:41:03 crc kubenswrapper[4762]: I0308 00:41:03.710278 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" Mar 08 00:41:04 crc kubenswrapper[4762]: I0308 00:41:04.277454 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm"] Mar 08 00:41:04 crc kubenswrapper[4762]: I0308 00:41:04.629141 4762 generic.go:334] "Generic (PLEG): container finished" podID="31b564be-f28f-428b-a65e-fd3521c7b9f3" containerID="6365e07d334ea3587fe1fb68ab8e5cd49c266d8dcc2080891b9b480642c38f69" exitCode=0 Mar 08 00:41:04 crc kubenswrapper[4762]: I0308 00:41:04.629213 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" event={"ID":"31b564be-f28f-428b-a65e-fd3521c7b9f3","Type":"ContainerDied","Data":"6365e07d334ea3587fe1fb68ab8e5cd49c266d8dcc2080891b9b480642c38f69"} Mar 08 00:41:04 crc kubenswrapper[4762]: I0308 00:41:04.629543 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" event={"ID":"31b564be-f28f-428b-a65e-fd3521c7b9f3","Type":"ContainerStarted","Data":"f975f4923ccab75e73c71991f362c5581dc72b05b95b2059847b4c7e671399d2"} Mar 08 00:41:05 crc kubenswrapper[4762]: I0308 00:41:05.640980 4762 generic.go:334] "Generic (PLEG): container finished" podID="31b564be-f28f-428b-a65e-fd3521c7b9f3" containerID="5c917241c3e023bd3102becd843e75bb5166aa2659994b8006d485e5e44a7006" exitCode=0 Mar 08 00:41:05 crc kubenswrapper[4762]: I0308 00:41:05.641059 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" event={"ID":"31b564be-f28f-428b-a65e-fd3521c7b9f3","Type":"ContainerDied","Data":"5c917241c3e023bd3102becd843e75bb5166aa2659994b8006d485e5e44a7006"} Mar 08 00:41:06 crc kubenswrapper[4762]: I0308 00:41:06.649825 4762 generic.go:334] "Generic (PLEG): container finished" podID="31b564be-f28f-428b-a65e-fd3521c7b9f3" containerID="69e0610dbf276a472aeb7a06d467908ea956bec349976d7c5bff0772356c1eab" exitCode=0 Mar 08 00:41:06 crc kubenswrapper[4762]: I0308 00:41:06.649952 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" event={"ID":"31b564be-f28f-428b-a65e-fd3521c7b9f3","Type":"ContainerDied","Data":"69e0610dbf276a472aeb7a06d467908ea956bec349976d7c5bff0772356c1eab"} Mar 08 00:41:08 crc kubenswrapper[4762]: I0308 00:41:08.117204 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" Mar 08 00:41:08 crc kubenswrapper[4762]: I0308 00:41:08.218456 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31b564be-f28f-428b-a65e-fd3521c7b9f3-bundle\") pod \"31b564be-f28f-428b-a65e-fd3521c7b9f3\" (UID: \"31b564be-f28f-428b-a65e-fd3521c7b9f3\") " Mar 08 00:41:08 crc kubenswrapper[4762]: I0308 00:41:08.218616 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rbbd\" (UniqueName: \"kubernetes.io/projected/31b564be-f28f-428b-a65e-fd3521c7b9f3-kube-api-access-5rbbd\") pod \"31b564be-f28f-428b-a65e-fd3521c7b9f3\" (UID: \"31b564be-f28f-428b-a65e-fd3521c7b9f3\") " Mar 08 00:41:08 crc kubenswrapper[4762]: I0308 00:41:08.218685 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31b564be-f28f-428b-a65e-fd3521c7b9f3-util\") pod \"31b564be-f28f-428b-a65e-fd3521c7b9f3\" (UID: \"31b564be-f28f-428b-a65e-fd3521c7b9f3\") " Mar 08 00:41:08 crc kubenswrapper[4762]: I0308 00:41:08.219612 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31b564be-f28f-428b-a65e-fd3521c7b9f3-bundle" (OuterVolumeSpecName: "bundle") pod "31b564be-f28f-428b-a65e-fd3521c7b9f3" (UID: "31b564be-f28f-428b-a65e-fd3521c7b9f3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:41:08 crc kubenswrapper[4762]: I0308 00:41:08.234173 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31b564be-f28f-428b-a65e-fd3521c7b9f3-kube-api-access-5rbbd" (OuterVolumeSpecName: "kube-api-access-5rbbd") pod "31b564be-f28f-428b-a65e-fd3521c7b9f3" (UID: "31b564be-f28f-428b-a65e-fd3521c7b9f3"). InnerVolumeSpecName "kube-api-access-5rbbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:41:08 crc kubenswrapper[4762]: I0308 00:41:08.250659 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31b564be-f28f-428b-a65e-fd3521c7b9f3-util" (OuterVolumeSpecName: "util") pod "31b564be-f28f-428b-a65e-fd3521c7b9f3" (UID: "31b564be-f28f-428b-a65e-fd3521c7b9f3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:41:08 crc kubenswrapper[4762]: I0308 00:41:08.320874 4762 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/31b564be-f28f-428b-a65e-fd3521c7b9f3-util\") on node \"crc\" DevicePath \"\"" Mar 08 00:41:08 crc kubenswrapper[4762]: I0308 00:41:08.320928 4762 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/31b564be-f28f-428b-a65e-fd3521c7b9f3-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:41:08 crc kubenswrapper[4762]: I0308 00:41:08.320948 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rbbd\" (UniqueName: \"kubernetes.io/projected/31b564be-f28f-428b-a65e-fd3521c7b9f3-kube-api-access-5rbbd\") on node \"crc\" DevicePath \"\"" Mar 08 00:41:08 crc kubenswrapper[4762]: I0308 00:41:08.675299 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" event={"ID":"31b564be-f28f-428b-a65e-fd3521c7b9f3","Type":"ContainerDied","Data":"f975f4923ccab75e73c71991f362c5581dc72b05b95b2059847b4c7e671399d2"} Mar 08 00:41:08 crc kubenswrapper[4762]: I0308 00:41:08.675667 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f975f4923ccab75e73c71991f362c5581dc72b05b95b2059847b4c7e671399d2" Mar 08 00:41:08 crc kubenswrapper[4762]: I0308 00:41:08.675360 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm" Mar 08 00:41:12 crc kubenswrapper[4762]: I0308 00:41:12.851435 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 00:41:12 crc kubenswrapper[4762]: I0308 00:41:12.851844 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:41:32 crc kubenswrapper[4762]: I0308 00:41:32.260337 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67"] Mar 08 00:41:32 crc kubenswrapper[4762]: E0308 00:41:32.261392 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b564be-f28f-428b-a65e-fd3521c7b9f3" containerName="pull" Mar 08 00:41:32 crc kubenswrapper[4762]: I0308 00:41:32.261410 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b564be-f28f-428b-a65e-fd3521c7b9f3" containerName="pull" Mar 08 00:41:32 crc kubenswrapper[4762]: E0308 00:41:32.261430 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b564be-f28f-428b-a65e-fd3521c7b9f3" containerName="util" Mar 08 00:41:32 crc kubenswrapper[4762]: I0308 00:41:32.261437 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b564be-f28f-428b-a65e-fd3521c7b9f3" containerName="util" Mar 08 00:41:32 crc kubenswrapper[4762]: E0308 00:41:32.261460 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31b564be-f28f-428b-a65e-fd3521c7b9f3" 
containerName="extract"
Mar 08 00:41:32 crc kubenswrapper[4762]: I0308 00:41:32.261467 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="31b564be-f28f-428b-a65e-fd3521c7b9f3" containerName="extract"
Mar 08 00:41:32 crc kubenswrapper[4762]: I0308 00:41:32.261654 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="31b564be-f28f-428b-a65e-fd3521c7b9f3" containerName="extract"
Mar 08 00:41:32 crc kubenswrapper[4762]: I0308 00:41:32.262386 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67"
Mar 08 00:41:32 crc kubenswrapper[4762]: I0308 00:41:32.268353 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-2vh5m"
Mar 08 00:41:32 crc kubenswrapper[4762]: I0308 00:41:32.308190 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67"]
Mar 08 00:41:32 crc kubenswrapper[4762]: I0308 00:41:32.387632 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwprn\" (UniqueName: \"kubernetes.io/projected/2032bfa9-398b-4802-84bc-272c70f31afb-kube-api-access-qwprn\") pod \"openstack-operator-controller-init-5b4fc57fb8-bgr67\" (UID: \"2032bfa9-398b-4802-84bc-272c70f31afb\") " pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67"
Mar 08 00:41:32 crc kubenswrapper[4762]: I0308 00:41:32.489818 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwprn\" (UniqueName: \"kubernetes.io/projected/2032bfa9-398b-4802-84bc-272c70f31afb-kube-api-access-qwprn\") pod \"openstack-operator-controller-init-5b4fc57fb8-bgr67\" (UID: \"2032bfa9-398b-4802-84bc-272c70f31afb\") " pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67"
Mar 08 00:41:32 crc kubenswrapper[4762]: I0308 00:41:32.511231 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwprn\" (UniqueName: \"kubernetes.io/projected/2032bfa9-398b-4802-84bc-272c70f31afb-kube-api-access-qwprn\") pod \"openstack-operator-controller-init-5b4fc57fb8-bgr67\" (UID: \"2032bfa9-398b-4802-84bc-272c70f31afb\") " pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67"
Mar 08 00:41:32 crc kubenswrapper[4762]: I0308 00:41:32.585481 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67"
Mar 08 00:41:32 crc kubenswrapper[4762]: I0308 00:41:32.905403 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67"]
Mar 08 00:41:33 crc kubenswrapper[4762]: I0308 00:41:33.904014 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67" event={"ID":"2032bfa9-398b-4802-84bc-272c70f31afb","Type":"ContainerStarted","Data":"1f93bce90d23254908b85ea38e91d422fda78abd73983778ec6dc117eb1652e1"}
Mar 08 00:41:37 crc kubenswrapper[4762]: I0308 00:41:37.944791 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67" event={"ID":"2032bfa9-398b-4802-84bc-272c70f31afb","Type":"ContainerStarted","Data":"61f209625b5696ff4e2b1c69f6a95d86fd34e1db9bf5cc9b8ece7b4746f58686"}
Mar 08 00:41:37 crc kubenswrapper[4762]: I0308 00:41:37.945678 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67"
Mar 08 00:41:38 crc kubenswrapper[4762]: I0308 00:41:38.009101 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67" podStartSLOduration=1.9634671209999999 podStartE2EDuration="6.009066332s" podCreationTimestamp="2026-03-08 00:41:32 +0000 UTC" firstStartedPulling="2026-03-08 00:41:32.914859789 +0000 UTC m=+1114.389004133" lastFinishedPulling="2026-03-08 00:41:36.960459 +0000 UTC m=+1118.434603344" observedRunningTime="2026-03-08 00:41:37.992832593 +0000 UTC m=+1119.466976977" watchObservedRunningTime="2026-03-08 00:41:38.009066332 +0000 UTC m=+1119.483210716"
Mar 08 00:41:42 crc kubenswrapper[4762]: I0308 00:41:42.590051 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67"
Mar 08 00:41:42 crc kubenswrapper[4762]: I0308 00:41:42.852480 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 08 00:41:42 crc kubenswrapper[4762]: I0308 00:41:42.853410 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 08 00:42:00 crc kubenswrapper[4762]: I0308 00:42:00.138093 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548842-2fzrv"]
Mar 08 00:42:00 crc kubenswrapper[4762]: I0308 00:42:00.140570 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548842-2fzrv"
Mar 08 00:42:00 crc kubenswrapper[4762]: I0308 00:42:00.144312 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 08 00:42:00 crc kubenswrapper[4762]: I0308 00:42:00.144538 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk"
Mar 08 00:42:00 crc kubenswrapper[4762]: I0308 00:42:00.144695 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 08 00:42:00 crc kubenswrapper[4762]: I0308 00:42:00.154991 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548842-2fzrv"]
Mar 08 00:42:00 crc kubenswrapper[4762]: I0308 00:42:00.227582 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2l6w\" (UniqueName: \"kubernetes.io/projected/88d54f80-7455-4a4b-8d9e-b5e24de88ed5-kube-api-access-h2l6w\") pod \"auto-csr-approver-29548842-2fzrv\" (UID: \"88d54f80-7455-4a4b-8d9e-b5e24de88ed5\") " pod="openshift-infra/auto-csr-approver-29548842-2fzrv"
Mar 08 00:42:00 crc kubenswrapper[4762]: I0308 00:42:00.332540 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2l6w\" (UniqueName: \"kubernetes.io/projected/88d54f80-7455-4a4b-8d9e-b5e24de88ed5-kube-api-access-h2l6w\") pod \"auto-csr-approver-29548842-2fzrv\" (UID: \"88d54f80-7455-4a4b-8d9e-b5e24de88ed5\") " pod="openshift-infra/auto-csr-approver-29548842-2fzrv"
Mar 08 00:42:00 crc kubenswrapper[4762]: I0308 00:42:00.358622 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2l6w\" (UniqueName: \"kubernetes.io/projected/88d54f80-7455-4a4b-8d9e-b5e24de88ed5-kube-api-access-h2l6w\") pod \"auto-csr-approver-29548842-2fzrv\" (UID: \"88d54f80-7455-4a4b-8d9e-b5e24de88ed5\") " pod="openshift-infra/auto-csr-approver-29548842-2fzrv"
Mar 08 00:42:00 crc kubenswrapper[4762]: I0308 00:42:00.466689 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548842-2fzrv"
Mar 08 00:42:00 crc kubenswrapper[4762]: I0308 00:42:00.895076 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548842-2fzrv"]
Mar 08 00:42:01 crc kubenswrapper[4762]: I0308 00:42:01.164161 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548842-2fzrv" event={"ID":"88d54f80-7455-4a4b-8d9e-b5e24de88ed5","Type":"ContainerStarted","Data":"bcbbde3c7e239157dc751c2645bf940480a29fc3ef0d42d7ff91a72394bd36df"}
Mar 08 00:42:03 crc kubenswrapper[4762]: I0308 00:42:03.178613 4762 generic.go:334] "Generic (PLEG): container finished" podID="88d54f80-7455-4a4b-8d9e-b5e24de88ed5" containerID="8d668b72a09e2c58c81f0e42f8e5a2eef83c846a75599e8a85d6116c3d164ebe" exitCode=0
Mar 08 00:42:03 crc kubenswrapper[4762]: I0308 00:42:03.178827 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548842-2fzrv" event={"ID":"88d54f80-7455-4a4b-8d9e-b5e24de88ed5","Type":"ContainerDied","Data":"8d668b72a09e2c58c81f0e42f8e5a2eef83c846a75599e8a85d6116c3d164ebe"}
Mar 08 00:42:03 crc kubenswrapper[4762]: I0308 00:42:03.989408 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz"]
Mar 08 00:42:03 crc kubenswrapper[4762]: I0308 00:42:03.990747 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz"
Mar 08 00:42:03 crc kubenswrapper[4762]: I0308 00:42:03.992583 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-vr6qm"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.006260 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.031812 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.033228 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.035475 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-j5kb7"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.040201 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.041584 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.047500 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-dqjpg"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.050843 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.062248 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.063254 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.066526 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-9b8z8"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.081926 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.090608 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.094514 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlpzb\" (UniqueName: \"kubernetes.io/projected/20b130fa-d7f7-441a-bd96-0d5858f1ece1-kube-api-access-rlpzb\") pod \"barbican-operator-controller-manager-6db6876945-2nfcz\" (UID: \"20b130fa-d7f7-441a-bd96-0d5858f1ece1\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.094578 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvvpx\" (UniqueName: \"kubernetes.io/projected/60096a41-cef5-4818-a549-96b51b04cd8f-kube-api-access-cvvpx\") pod \"designate-operator-controller-manager-5d87c9d997-8r57n\" (UID: \"60096a41-cef5-4818-a549-96b51b04cd8f\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.094629 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm2b6\" (UniqueName: \"kubernetes.io/projected/625fe5b5-181a-47db-8656-00c8f5fc045f-kube-api-access-rm2b6\") pod \"glance-operator-controller-manager-64db6967f8-c6prb\" (UID: \"625fe5b5-181a-47db-8656-00c8f5fc045f\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.094653 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phmvs\" (UniqueName: \"kubernetes.io/projected/e2cdcc67-fa0d-4f82-9ca7-219626ee5fdd-kube-api-access-phmvs\") pod \"cinder-operator-controller-manager-55d77d7b5c-6dwmz\" (UID: \"e2cdcc67-fa0d-4f82-9ca7-219626ee5fdd\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.110030 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.111109 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.114110 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-7cd74"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.125901 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.126843 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.135751 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-4hdw7"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.137735 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.148140 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.169951 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.174035 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.177273 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.181383 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-gm2ds"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.244520 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlpzb\" (UniqueName: \"kubernetes.io/projected/20b130fa-d7f7-441a-bd96-0d5858f1ece1-kube-api-access-rlpzb\") pod \"barbican-operator-controller-manager-6db6876945-2nfcz\" (UID: \"20b130fa-d7f7-441a-bd96-0d5858f1ece1\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.244612 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvvpx\" (UniqueName: \"kubernetes.io/projected/60096a41-cef5-4818-a549-96b51b04cd8f-kube-api-access-cvvpx\") pod \"designate-operator-controller-manager-5d87c9d997-8r57n\" (UID: \"60096a41-cef5-4818-a549-96b51b04cd8f\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.244654 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjv5p\" (UniqueName: \"kubernetes.io/projected/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-kube-api-access-fjv5p\") pod \"infra-operator-controller-manager-f7fcc58b9-dtdxk\" (UID: \"bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.244680 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm2b6\" (UniqueName: \"kubernetes.io/projected/625fe5b5-181a-47db-8656-00c8f5fc045f-kube-api-access-rm2b6\") pod \"glance-operator-controller-manager-64db6967f8-c6prb\" (UID: \"625fe5b5-181a-47db-8656-00c8f5fc045f\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.244700 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7h7v\" (UniqueName: \"kubernetes.io/projected/2352d4f2-aadc-4ad7-806e-9324d3be5116-kube-api-access-x7h7v\") pod \"heat-operator-controller-manager-cf99c678f-vmb9b\" (UID: \"2352d4f2-aadc-4ad7-806e-9324d3be5116\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.244732 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phmvs\" (UniqueName: \"kubernetes.io/projected/e2cdcc67-fa0d-4f82-9ca7-219626ee5fdd-kube-api-access-phmvs\") pod \"cinder-operator-controller-manager-55d77d7b5c-6dwmz\" (UID: \"e2cdcc67-fa0d-4f82-9ca7-219626ee5fdd\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.244792 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-dtdxk\" (UID: \"bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.244835 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8ks2\" (UniqueName: \"kubernetes.io/projected/d5f0be01-26e9-4c4e-8122-61659529e505-kube-api-access-d8ks2\") pod \"horizon-operator-controller-manager-78bc7f9bd9-m7h5s\" (UID: \"d5f0be01-26e9-4c4e-8122-61659529e505\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.256855 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.269915 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvvpx\" (UniqueName: \"kubernetes.io/projected/60096a41-cef5-4818-a549-96b51b04cd8f-kube-api-access-cvvpx\") pod \"designate-operator-controller-manager-5d87c9d997-8r57n\" (UID: \"60096a41-cef5-4818-a549-96b51b04cd8f\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.286040 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phmvs\" (UniqueName: \"kubernetes.io/projected/e2cdcc67-fa0d-4f82-9ca7-219626ee5fdd-kube-api-access-phmvs\") pod \"cinder-operator-controller-manager-55d77d7b5c-6dwmz\" (UID: \"e2cdcc67-fa0d-4f82-9ca7-219626ee5fdd\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.286110 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.286109 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlpzb\" (UniqueName: \"kubernetes.io/projected/20b130fa-d7f7-441a-bd96-0d5858f1ece1-kube-api-access-rlpzb\") pod \"barbican-operator-controller-manager-6db6876945-2nfcz\" (UID: \"20b130fa-d7f7-441a-bd96-0d5858f1ece1\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.291732 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm2b6\" (UniqueName: \"kubernetes.io/projected/625fe5b5-181a-47db-8656-00c8f5fc045f-kube-api-access-rm2b6\") pod \"glance-operator-controller-manager-64db6967f8-c6prb\" (UID: \"625fe5b5-181a-47db-8656-00c8f5fc045f\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.299161 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.299307 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.309485 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.313658 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-9cmkr"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.329754 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.331310 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.338197 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-2nprm"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.344192 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-m26xv"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.345219 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.346944 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-zzpdq"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.362973 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-dtdxk\" (UID: \"bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.363131 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8ks2\" (UniqueName: \"kubernetes.io/projected/d5f0be01-26e9-4c4e-8122-61659529e505-kube-api-access-d8ks2\") pod \"horizon-operator-controller-manager-78bc7f9bd9-m7h5s\" (UID: \"d5f0be01-26e9-4c4e-8122-61659529e505\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.363377 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh6j8\" (UniqueName: \"kubernetes.io/projected/5edc85d7-4f23-4c94-a998-17f8402c37d3-kube-api-access-mh6j8\") pod \"ironic-operator-controller-manager-545456dc4-pf8l2\" (UID: \"5edc85d7-4f23-4c94-a998-17f8402c37d3\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.363483 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjv5p\" (UniqueName: \"kubernetes.io/projected/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-kube-api-access-fjv5p\") pod \"infra-operator-controller-manager-f7fcc58b9-dtdxk\" (UID: \"bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.363525 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7h7v\" (UniqueName: \"kubernetes.io/projected/2352d4f2-aadc-4ad7-806e-9324d3be5116-kube-api-access-x7h7v\") pod \"heat-operator-controller-manager-cf99c678f-vmb9b\" (UID: \"2352d4f2-aadc-4ad7-806e-9324d3be5116\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.369470 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.370474 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww"]
Mar 08 00:42:04 crc kubenswrapper[4762]: E0308 00:42:04.371591 4762 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Mar 08 00:42:04 crc kubenswrapper[4762]: E0308 00:42:04.371649 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert podName:bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b nodeName:}" failed. No retries permitted until 2026-03-08 00:42:04.87162741 +0000 UTC m=+1146.345771754 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert") pod "infra-operator-controller-manager-f7fcc58b9-dtdxk" (UID: "bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b") : secret "infra-operator-webhook-server-cert" not found
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.378505 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.385244 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.404755 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7h7v\" (UniqueName: \"kubernetes.io/projected/2352d4f2-aadc-4ad7-806e-9324d3be5116-kube-api-access-x7h7v\") pod \"heat-operator-controller-manager-cf99c678f-vmb9b\" (UID: \"2352d4f2-aadc-4ad7-806e-9324d3be5116\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.405845 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjv5p\" (UniqueName: \"kubernetes.io/projected/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-kube-api-access-fjv5p\") pod \"infra-operator-controller-manager-f7fcc58b9-dtdxk\" (UID: \"bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.409414 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8ks2\" (UniqueName: \"kubernetes.io/projected/d5f0be01-26e9-4c4e-8122-61659529e505-kube-api-access-d8ks2\") pod \"horizon-operator-controller-manager-78bc7f9bd9-m7h5s\" (UID: \"d5f0be01-26e9-4c4e-8122-61659529e505\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.412222 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-m26xv"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.420621 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.421674 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.424049 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-gc42r"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.433783 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.434359 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.450350 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.452475 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.458014 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-w9bqk"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.469485 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.469593 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.471676 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.472130 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh6j8\" (UniqueName: \"kubernetes.io/projected/5edc85d7-4f23-4c94-a998-17f8402c37d3-kube-api-access-mh6j8\") pod \"ironic-operator-controller-manager-545456dc4-pf8l2\" (UID: \"5edc85d7-4f23-4c94-a998-17f8402c37d3\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.472207 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srmsv\" (UniqueName: \"kubernetes.io/projected/7f6a4543-a300-4393-93e0-fcfeae3ccd61-kube-api-access-srmsv\") pod \"manila-operator-controller-manager-67d996989d-m26xv\" (UID: \"7f6a4543-a300-4393-93e0-fcfeae3ccd61\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.472359 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmjjl\" (UniqueName: \"kubernetes.io/projected/ead6b665-cd0f-475a-a71b-33fd36246484-kube-api-access-bmjjl\") pod \"keystone-operator-controller-manager-7c789f89c6-hwdww\" (UID: \"ead6b665-cd0f-475a-a71b-33fd36246484\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.474988 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-phpff"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.500248 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.501517 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh6j8\" (UniqueName: \"kubernetes.io/projected/5edc85d7-4f23-4c94-a998-17f8402c37d3-kube-api-access-mh6j8\") pod \"ironic-operator-controller-manager-545456dc4-pf8l2\" (UID: \"5edc85d7-4f23-4c94-a998-17f8402c37d3\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.523527 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.549720 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.551251 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.554353 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-v7bnb"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.574919 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srmsv\" (UniqueName: \"kubernetes.io/projected/7f6a4543-a300-4393-93e0-fcfeae3ccd61-kube-api-access-srmsv\") pod \"manila-operator-controller-manager-67d996989d-m26xv\" (UID: \"7f6a4543-a300-4393-93e0-fcfeae3ccd61\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.575023 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmjjl\" (UniqueName: \"kubernetes.io/projected/ead6b665-cd0f-475a-a71b-33fd36246484-kube-api-access-bmjjl\") pod \"keystone-operator-controller-manager-7c789f89c6-hwdww\" (UID: \"ead6b665-cd0f-475a-a71b-33fd36246484\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.575066 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzzgc\" (UniqueName: \"kubernetes.io/projected/ac0364ec-ad05-431d-b2f4-c92353f15f4c-kube-api-access-gzzgc\") pod \"mariadb-operator-controller-manager-7b6bfb6475-jvlps\" (UID: \"ac0364ec-ad05-431d-b2f4-c92353f15f4c\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.575096 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn54q\" (UniqueName: \"kubernetes.io/projected/da66283d-dd88-4e6a-a4ad-496064bc8a78-kube-api-access-hn54q\") pod \"neutron-operator-controller-manager-54688575f-lk8mx\" (UID: \"da66283d-dd88-4e6a-a4ad-496064bc8a78\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.575137 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfpfl\" (UniqueName: \"kubernetes.io/projected/05d1f89d-b2b2-48ff-8555-e9f68ac3300a-kube-api-access-xfpfl\") pod \"nova-operator-controller-manager-74b6b5dc96-xmkb6\" (UID: \"05d1f89d-b2b2-48ff-8555-e9f68ac3300a\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.578444 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.583126 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.584208 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.588984 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.589203 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-dvlqj"
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.594210 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.600008 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h"]
Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.600160 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.601994 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-jbvjq" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.602440 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2"] Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.617966 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srmsv\" (UniqueName: \"kubernetes.io/projected/7f6a4543-a300-4393-93e0-fcfeae3ccd61-kube-api-access-srmsv\") pod \"manila-operator-controller-manager-67d996989d-m26xv\" (UID: \"7f6a4543-a300-4393-93e0-fcfeae3ccd61\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.619048 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmjjl\" (UniqueName: \"kubernetes.io/projected/ead6b665-cd0f-475a-a71b-33fd36246484-kube-api-access-bmjjl\") pod \"keystone-operator-controller-manager-7c789f89c6-hwdww\" (UID: \"ead6b665-cd0f-475a-a71b-33fd36246484\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.647799 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt"] Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.652472 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.656271 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-ft8mt" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.676072 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzzgc\" (UniqueName: \"kubernetes.io/projected/ac0364ec-ad05-431d-b2f4-c92353f15f4c-kube-api-access-gzzgc\") pod \"mariadb-operator-controller-manager-7b6bfb6475-jvlps\" (UID: \"ac0364ec-ad05-431d-b2f4-c92353f15f4c\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.676113 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn54q\" (UniqueName: \"kubernetes.io/projected/da66283d-dd88-4e6a-a4ad-496064bc8a78-kube-api-access-hn54q\") pod \"neutron-operator-controller-manager-54688575f-lk8mx\" (UID: \"da66283d-dd88-4e6a-a4ad-496064bc8a78\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.676142 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl22c\" (UniqueName: \"kubernetes.io/projected/6b30a18d-93d3-48de-9b32-7c2326e04220-kube-api-access-dl22c\") pod \"octavia-operator-controller-manager-5d86c7ddb7-847b8\" (UID: \"6b30a18d-93d3-48de-9b32-7c2326e04220\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.676163 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfpfl\" (UniqueName: \"kubernetes.io/projected/05d1f89d-b2b2-48ff-8555-e9f68ac3300a-kube-api-access-xfpfl\") pod 
\"nova-operator-controller-manager-74b6b5dc96-xmkb6\" (UID: \"05d1f89d-b2b2-48ff-8555-e9f68ac3300a\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.676218 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2mlt\" (UniqueName: \"kubernetes.io/projected/8fc55d76-cb72-4ac9-b132-24b997e298a3-kube-api-access-x2mlt\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2\" (UID: \"8fc55d76-cb72-4ac9-b132-24b997e298a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.676240 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2\" (UID: \"8fc55d76-cb72-4ac9-b132-24b997e298a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.676285 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9855w\" (UniqueName: \"kubernetes.io/projected/1906010e-f253-4d33-8e97-96d8860c3ff6-kube-api-access-9855w\") pod \"ovn-operator-controller-manager-75684d597f-dh78h\" (UID: \"1906010e-f253-4d33-8e97-96d8860c3ff6\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.694439 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.707868 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh"] Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.709315 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.711714 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfpfl\" (UniqueName: \"kubernetes.io/projected/05d1f89d-b2b2-48ff-8555-e9f68ac3300a-kube-api-access-xfpfl\") pod \"nova-operator-controller-manager-74b6b5dc96-xmkb6\" (UID: \"05d1f89d-b2b2-48ff-8555-e9f68ac3300a\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.712743 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzzgc\" (UniqueName: \"kubernetes.io/projected/ac0364ec-ad05-431d-b2f4-c92353f15f4c-kube-api-access-gzzgc\") pod \"mariadb-operator-controller-manager-7b6bfb6475-jvlps\" (UID: \"ac0364ec-ad05-431d-b2f4-c92353f15f4c\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.725827 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt"] Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.726718 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn54q\" (UniqueName: \"kubernetes.io/projected/da66283d-dd88-4e6a-a4ad-496064bc8a78-kube-api-access-hn54q\") pod \"neutron-operator-controller-manager-54688575f-lk8mx\" (UID: \"da66283d-dd88-4e6a-a4ad-496064bc8a78\") " 
pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.730115 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-dv77q" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.762251 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh"] Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.765073 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.788202 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2\" (UID: \"8fc55d76-cb72-4ac9-b132-24b997e298a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.788264 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq94l\" (UniqueName: \"kubernetes.io/projected/3216ee69-307e-4151-889b-6e71f6e8c47a-kube-api-access-cq94l\") pod \"swift-operator-controller-manager-9b9ff9f4d-k88bh\" (UID: \"3216ee69-307e-4151-889b-6e71f6e8c47a\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.788332 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9855w\" (UniqueName: \"kubernetes.io/projected/1906010e-f253-4d33-8e97-96d8860c3ff6-kube-api-access-9855w\") pod \"ovn-operator-controller-manager-75684d597f-dh78h\" (UID: \"1906010e-f253-4d33-8e97-96d8860c3ff6\") " 
pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.788386 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkzjh\" (UniqueName: \"kubernetes.io/projected/d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3-kube-api-access-vkzjh\") pod \"placement-operator-controller-manager-648564c9fc-ptbxt\" (UID: \"d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.788432 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl22c\" (UniqueName: \"kubernetes.io/projected/6b30a18d-93d3-48de-9b32-7c2326e04220-kube-api-access-dl22c\") pod \"octavia-operator-controller-manager-5d86c7ddb7-847b8\" (UID: \"6b30a18d-93d3-48de-9b32-7c2326e04220\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.788493 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2mlt\" (UniqueName: \"kubernetes.io/projected/8fc55d76-cb72-4ac9-b132-24b997e298a3-kube-api-access-x2mlt\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2\" (UID: \"8fc55d76-cb72-4ac9-b132-24b997e298a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.802696 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv" Mar 08 00:42:04 crc kubenswrapper[4762]: E0308 00:42:04.804388 4762 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 08 00:42:04 crc kubenswrapper[4762]: E0308 00:42:04.804467 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert podName:8fc55d76-cb72-4ac9-b132-24b997e298a3 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:05.304433637 +0000 UTC m=+1146.778577981 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" (UID: "8fc55d76-cb72-4ac9-b132-24b997e298a3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.811453 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2mlt\" (UniqueName: \"kubernetes.io/projected/8fc55d76-cb72-4ac9-b132-24b997e298a3-kube-api-access-x2mlt\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2\" (UID: \"8fc55d76-cb72-4ac9-b132-24b997e298a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.816289 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92"] Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.817542 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.819544 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-k7hnq" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.848223 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.853242 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.856150 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548842-2fzrv" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.862075 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.879057 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9855w\" (UniqueName: \"kubernetes.io/projected/1906010e-f253-4d33-8e97-96d8860c3ff6-kube-api-access-9855w\") pod \"ovn-operator-controller-manager-75684d597f-dh78h\" (UID: \"1906010e-f253-4d33-8e97-96d8860c3ff6\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.886432 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl22c\" (UniqueName: \"kubernetes.io/projected/6b30a18d-93d3-48de-9b32-7c2326e04220-kube-api-access-dl22c\") pod \"octavia-operator-controller-manager-5d86c7ddb7-847b8\" (UID: \"6b30a18d-93d3-48de-9b32-7c2326e04220\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.887065 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.890218 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq94l\" (UniqueName: \"kubernetes.io/projected/3216ee69-307e-4151-889b-6e71f6e8c47a-kube-api-access-cq94l\") pod \"swift-operator-controller-manager-9b9ff9f4d-k88bh\" (UID: \"3216ee69-307e-4151-889b-6e71f6e8c47a\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.890267 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-dtdxk\" (UID: \"bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.890318 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkzjh\" (UniqueName: \"kubernetes.io/projected/d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3-kube-api-access-vkzjh\") pod \"placement-operator-controller-manager-648564c9fc-ptbxt\" (UID: \"d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.890370 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsw2p\" (UniqueName: \"kubernetes.io/projected/f82c21a8-e080-4d70-b898-8c15a7b71989-kube-api-access-qsw2p\") pod \"telemetry-operator-controller-manager-6dff66bc49-x8f92\" (UID: \"f82c21a8-e080-4d70-b898-8c15a7b71989\") " pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" Mar 08 00:42:04 crc kubenswrapper[4762]: E0308 00:42:04.891232 4762 
secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 08 00:42:04 crc kubenswrapper[4762]: E0308 00:42:04.891279 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert podName:bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b nodeName:}" failed. No retries permitted until 2026-03-08 00:42:05.891265373 +0000 UTC m=+1147.365409717 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert") pod "infra-operator-controller-manager-f7fcc58b9-dtdxk" (UID: "bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b") : secret "infra-operator-webhook-server-cert" not found Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.927305 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq94l\" (UniqueName: \"kubernetes.io/projected/3216ee69-307e-4151-889b-6e71f6e8c47a-kube-api-access-cq94l\") pod \"swift-operator-controller-manager-9b9ff9f4d-k88bh\" (UID: \"3216ee69-307e-4151-889b-6e71f6e8c47a\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.927734 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkzjh\" (UniqueName: \"kubernetes.io/projected/d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3-kube-api-access-vkzjh\") pod \"placement-operator-controller-manager-648564c9fc-ptbxt\" (UID: \"d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.942433 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92"] Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.974472 4762 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h" Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.991429 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2l6w\" (UniqueName: \"kubernetes.io/projected/88d54f80-7455-4a4b-8d9e-b5e24de88ed5-kube-api-access-h2l6w\") pod \"88d54f80-7455-4a4b-8d9e-b5e24de88ed5\" (UID: \"88d54f80-7455-4a4b-8d9e-b5e24de88ed5\") " Mar 08 00:42:04 crc kubenswrapper[4762]: I0308 00:42:04.991841 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsw2p\" (UniqueName: \"kubernetes.io/projected/f82c21a8-e080-4d70-b898-8c15a7b71989-kube-api-access-qsw2p\") pod \"telemetry-operator-controller-manager-6dff66bc49-x8f92\" (UID: \"f82c21a8-e080-4d70-b898-8c15a7b71989\") " pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.003459 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l"] Mar 08 00:42:05 crc kubenswrapper[4762]: E0308 00:42:05.003818 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d54f80-7455-4a4b-8d9e-b5e24de88ed5" containerName="oc" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.003834 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d54f80-7455-4a4b-8d9e-b5e24de88ed5" containerName="oc" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.004420 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.004600 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="88d54f80-7455-4a4b-8d9e-b5e24de88ed5" containerName="oc" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.005014 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88d54f80-7455-4a4b-8d9e-b5e24de88ed5-kube-api-access-h2l6w" (OuterVolumeSpecName: "kube-api-access-h2l6w") pod "88d54f80-7455-4a4b-8d9e-b5e24de88ed5" (UID: "88d54f80-7455-4a4b-8d9e-b5e24de88ed5"). InnerVolumeSpecName "kube-api-access-h2l6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.006015 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.010165 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-gnfv9" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.015634 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsw2p\" (UniqueName: \"kubernetes.io/projected/f82c21a8-e080-4d70-b898-8c15a7b71989-kube-api-access-qsw2p\") pod \"telemetry-operator-controller-manager-6dff66bc49-x8f92\" (UID: \"f82c21a8-e080-4d70-b898-8c15a7b71989\") " pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.035882 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l"] Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.088446 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.094019 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnjgv\" (UniqueName: \"kubernetes.io/projected/1bc55675-0793-4489-b05d-03581df96527-kube-api-access-hnjgv\") pod \"test-operator-controller-manager-55b5ff4dbb-zfk9l\" (UID: \"1bc55675-0793-4489-b05d-03581df96527\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.094207 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2l6w\" (UniqueName: \"kubernetes.io/projected/88d54f80-7455-4a4b-8d9e-b5e24de88ed5-kube-api-access-h2l6w\") on node \"crc\" DevicePath \"\"" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.100420 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk"] Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.101476 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.114747 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-5fzqc" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.124427 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk"] Mar 08 00:42:05 crc kubenswrapper[4762]: W0308 00:42:05.164795 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20b130fa_d7f7_441a_bd96_0d5858f1ece1.slice/crio-5b2eac833807150fa42a24ea3557deb5818ba3cc7a3a224437de8755f57976c9 WatchSource:0}: Error finding container 5b2eac833807150fa42a24ea3557deb5818ba3cc7a3a224437de8755f57976c9: Status 404 returned error can't find the container with id 5b2eac833807150fa42a24ea3557deb5818ba3cc7a3a224437de8755f57976c9 Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.181043 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r"] Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.182400 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.185685 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.185809 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.185729 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-z2f8c" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.191263 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r"] Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.195293 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzmrs\" (UniqueName: \"kubernetes.io/projected/8e8be3de-e055-441d-bfff-7b966b35dc15-kube-api-access-tzmrs\") pod \"watcher-operator-controller-manager-bccc79885-5r7nk\" (UID: \"8e8be3de-e055-441d-bfff-7b966b35dc15\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.195380 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnjgv\" (UniqueName: \"kubernetes.io/projected/1bc55675-0793-4489-b05d-03581df96527-kube-api-access-hnjgv\") pod \"test-operator-controller-manager-55b5ff4dbb-zfk9l\" (UID: \"1bc55675-0793-4489-b05d-03581df96527\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.207853 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5546f"] Mar 08 
00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.209979 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5546f" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.212357 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.214193 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-56gk8" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.222099 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnjgv\" (UniqueName: \"kubernetes.io/projected/1bc55675-0793-4489-b05d-03581df96527-kube-api-access-hnjgv\") pod \"test-operator-controller-manager-55b5ff4dbb-zfk9l\" (UID: \"1bc55675-0793-4489-b05d-03581df96527\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.236008 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5546f"] Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.256398 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz"] Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.272490 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548842-2fzrv" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.286731 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz" event={"ID":"20b130fa-d7f7-441a-bd96-0d5858f1ece1","Type":"ContainerStarted","Data":"5b2eac833807150fa42a24ea3557deb5818ba3cc7a3a224437de8755f57976c9"} Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.287238 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n"] Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.287261 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548842-2fzrv" event={"ID":"88d54f80-7455-4a4b-8d9e-b5e24de88ed5","Type":"ContainerDied","Data":"bcbbde3c7e239157dc751c2645bf940480a29fc3ef0d42d7ff91a72394bd36df"} Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.287279 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcbbde3c7e239157dc751c2645bf940480a29fc3ef0d42d7ff91a72394bd36df" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.299144 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.299215 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: 
\"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.299274 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc49t\" (UniqueName: \"kubernetes.io/projected/4d895a55-fc09-4986-ae61-19b0c5425d15-kube-api-access-sc49t\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.299617 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvwjv\" (UniqueName: \"kubernetes.io/projected/c872048a-5196-4f23-97e2-ce9e611c9ea0-kube-api-access-lvwjv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5546f\" (UID: \"c872048a-5196-4f23-97e2-ce9e611c9ea0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5546f" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.299679 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzmrs\" (UniqueName: \"kubernetes.io/projected/8e8be3de-e055-441d-bfff-7b966b35dc15-kube-api-access-tzmrs\") pod \"watcher-operator-controller-manager-bccc79885-5r7nk\" (UID: \"8e8be3de-e055-441d-bfff-7b966b35dc15\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.317080 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzmrs\" (UniqueName: \"kubernetes.io/projected/8e8be3de-e055-441d-bfff-7b966b35dc15-kube-api-access-tzmrs\") pod \"watcher-operator-controller-manager-bccc79885-5r7nk\" (UID: \"8e8be3de-e055-441d-bfff-7b966b35dc15\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" Mar 
08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.331667 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.401845 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc49t\" (UniqueName: \"kubernetes.io/projected/4d895a55-fc09-4986-ae61-19b0c5425d15-kube-api-access-sc49t\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.401925 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2\" (UID: \"8fc55d76-cb72-4ac9-b132-24b997e298a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.401974 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvwjv\" (UniqueName: \"kubernetes.io/projected/c872048a-5196-4f23-97e2-ce9e611c9ea0-kube-api-access-lvwjv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5546f\" (UID: \"c872048a-5196-4f23-97e2-ce9e611c9ea0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5546f" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.402009 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " 
pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.402037 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:05 crc kubenswrapper[4762]: E0308 00:42:05.402180 4762 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 08 00:42:05 crc kubenswrapper[4762]: E0308 00:42:05.402230 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs podName:4d895a55-fc09-4986-ae61-19b0c5425d15 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:05.902215889 +0000 UTC m=+1147.376360233 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs") pod "openstack-operator-controller-manager-7585f757fc-xgd5r" (UID: "4d895a55-fc09-4986-ae61-19b0c5425d15") : secret "metrics-server-cert" not found Mar 08 00:42:05 crc kubenswrapper[4762]: E0308 00:42:05.402614 4762 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 08 00:42:05 crc kubenswrapper[4762]: E0308 00:42:05.402660 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert podName:8fc55d76-cb72-4ac9-b132-24b997e298a3 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:06.40265052 +0000 UTC m=+1147.876794874 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" (UID: "8fc55d76-cb72-4ac9-b132-24b997e298a3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 08 00:42:05 crc kubenswrapper[4762]: E0308 00:42:05.403033 4762 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 08 00:42:05 crc kubenswrapper[4762]: E0308 00:42:05.403072 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs podName:4d895a55-fc09-4986-ae61-19b0c5425d15 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:05.90306226 +0000 UTC m=+1147.377206604 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs") pod "openstack-operator-controller-manager-7585f757fc-xgd5r" (UID: "4d895a55-fc09-4986-ae61-19b0c5425d15") : secret "webhook-server-cert" not found Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.429560 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvwjv\" (UniqueName: \"kubernetes.io/projected/c872048a-5196-4f23-97e2-ce9e611c9ea0-kube-api-access-lvwjv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-5546f\" (UID: \"c872048a-5196-4f23-97e2-ce9e611c9ea0\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5546f" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.434086 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc49t\" (UniqueName: \"kubernetes.io/projected/4d895a55-fc09-4986-ae61-19b0c5425d15-kube-api-access-sc49t\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " 
pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.437155 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.579972 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz"] Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.592204 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb"] Mar 08 00:42:05 crc kubenswrapper[4762]: W0308 00:42:05.601980 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod625fe5b5_181a_47db_8656_00c8f5fc045f.slice/crio-dcadfe330584879b4fa9288958db1e0626222034edae39a7c28e38615eb0a99e WatchSource:0}: Error finding container dcadfe330584879b4fa9288958db1e0626222034edae39a7c28e38615eb0a99e: Status 404 returned error can't find the container with id dcadfe330584879b4fa9288958db1e0626222034edae39a7c28e38615eb0a99e Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.604189 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b"] Mar 08 00:42:05 crc kubenswrapper[4762]: W0308 00:42:05.611121 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2cdcc67_fa0d_4f82_9ca7_219626ee5fdd.slice/crio-5ae5c0723d9e3ae56978e04fc2f26cace25f935eb84b27b148a0bf0eccac9191 WatchSource:0}: Error finding container 5ae5c0723d9e3ae56978e04fc2f26cace25f935eb84b27b148a0bf0eccac9191: Status 404 returned error can't find the container with id 5ae5c0723d9e3ae56978e04fc2f26cace25f935eb84b27b148a0bf0eccac9191 Mar 08 00:42:05 crc 
kubenswrapper[4762]: I0308 00:42:05.659858 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s"] Mar 08 00:42:05 crc kubenswrapper[4762]: W0308 00:42:05.665834 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5f0be01_26e9_4c4e_8122_61659529e505.slice/crio-d88a42eccac8f0cde8fe4885f9525edc51926698ec19d90838f7bd1df148635a WatchSource:0}: Error finding container d88a42eccac8f0cde8fe4885f9525edc51926698ec19d90838f7bd1df148635a: Status 404 returned error can't find the container with id d88a42eccac8f0cde8fe4885f9525edc51926698ec19d90838f7bd1df148635a Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.720154 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5546f" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.914389 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.914444 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.914477 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-dtdxk\" (UID: \"bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" Mar 08 00:42:05 crc kubenswrapper[4762]: E0308 00:42:05.914617 4762 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 08 00:42:05 crc kubenswrapper[4762]: E0308 00:42:05.914702 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs podName:4d895a55-fc09-4986-ae61-19b0c5425d15 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:06.914680242 +0000 UTC m=+1148.388824586 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs") pod "openstack-operator-controller-manager-7585f757fc-xgd5r" (UID: "4d895a55-fc09-4986-ae61-19b0c5425d15") : secret "webhook-server-cert" not found Mar 08 00:42:05 crc kubenswrapper[4762]: E0308 00:42:05.914707 4762 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 08 00:42:05 crc kubenswrapper[4762]: E0308 00:42:05.914735 4762 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 08 00:42:05 crc kubenswrapper[4762]: E0308 00:42:05.914812 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert podName:bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b nodeName:}" failed. No retries permitted until 2026-03-08 00:42:07.914795284 +0000 UTC m=+1149.388939628 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert") pod "infra-operator-controller-manager-f7fcc58b9-dtdxk" (UID: "bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b") : secret "infra-operator-webhook-server-cert" not found Mar 08 00:42:05 crc kubenswrapper[4762]: E0308 00:42:05.914829 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs podName:4d895a55-fc09-4986-ae61-19b0c5425d15 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:06.914822425 +0000 UTC m=+1148.388966769 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs") pod "openstack-operator-controller-manager-7585f757fc-xgd5r" (UID: "4d895a55-fc09-4986-ae61-19b0c5425d15") : secret "metrics-server-cert" not found Mar 08 00:42:05 crc kubenswrapper[4762]: I0308 00:42:05.992100 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548836-kfpfl"] Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.015368 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548836-kfpfl"] Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.042249 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt"] Mar 08 00:42:06 crc kubenswrapper[4762]: W0308 00:42:06.059935 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda66283d_dd88_4e6a_a4ad_496064bc8a78.slice/crio-a91eb5f1c086f9ba59230d0d5b75d2bd0ab5cf082903b327f9c99a40f355179c WatchSource:0}: Error finding container a91eb5f1c086f9ba59230d0d5b75d2bd0ab5cf082903b327f9c99a40f355179c: Status 404 returned error can't find the container with id 
a91eb5f1c086f9ba59230d0d5b75d2bd0ab5cf082903b327f9c99a40f355179c Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.062834 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2"] Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.070131 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6"] Mar 08 00:42:06 crc kubenswrapper[4762]: W0308 00:42:06.072794 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd999e2f1_c8f0_4e5f_b4ad_502b3e5413e3.slice/crio-8b0f6440c1b8b830aba0d3afa7ed6c73d2967dfbe8c697caee93423f267ed7f4 WatchSource:0}: Error finding container 8b0f6440c1b8b830aba0d3afa7ed6c73d2967dfbe8c697caee93423f267ed7f4: Status 404 returned error can't find the container with id 8b0f6440c1b8b830aba0d3afa7ed6c73d2967dfbe8c697caee93423f267ed7f4 Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.076299 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx"] Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.084738 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8"] Mar 08 00:42:06 crc kubenswrapper[4762]: W0308 00:42:06.085497 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5edc85d7_4f23_4c94_a998_17f8402c37d3.slice/crio-cb75292a039e5d549451a46dea28728c45620397ec6fd60fa7e4e43554d139f7 WatchSource:0}: Error finding container cb75292a039e5d549451a46dea28728c45620397ec6fd60fa7e4e43554d139f7: Status 404 returned error can't find the container with id cb75292a039e5d549451a46dea28728c45620397ec6fd60fa7e4e43554d139f7 Mar 08 00:42:06 crc kubenswrapper[4762]: W0308 00:42:06.087921 4762 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05d1f89d_b2b2_48ff_8555_e9f68ac3300a.slice/crio-e7cb2018db82f0e71745718d609c27cac7f66d7846b8e412188790e19f466b74 WatchSource:0}: Error finding container e7cb2018db82f0e71745718d609c27cac7f66d7846b8e412188790e19f466b74: Status 404 returned error can't find the container with id e7cb2018db82f0e71745718d609c27cac7f66d7846b8e412188790e19f466b74 Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.093417 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps"] Mar 08 00:42:06 crc kubenswrapper[4762]: W0308 00:42:06.094048 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b30a18d_93d3_48de_9b32_7c2326e04220.slice/crio-b257835005590dfc897df69d5f030bde7af445a98c51419e1cfe6ac5ee17c825 WatchSource:0}: Error finding container b257835005590dfc897df69d5f030bde7af445a98c51419e1cfe6ac5ee17c825: Status 404 returned error can't find the container with id b257835005590dfc897df69d5f030bde7af445a98c51419e1cfe6ac5ee17c825 Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.120720 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww"] Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.134814 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-m26xv"] Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.160651 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh"] Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.167439 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h"] Mar 08 
00:42:06 crc kubenswrapper[4762]: W0308 00:42:06.168043 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3216ee69_307e_4151_889b_6e71f6e8c47a.slice/crio-ff2d44a0483a43a691c5f35b9c3c7db71ea8d0954365140e96ac085a80b7701a WatchSource:0}: Error finding container ff2d44a0483a43a691c5f35b9c3c7db71ea8d0954365140e96ac085a80b7701a: Status 404 returned error can't find the container with id ff2d44a0483a43a691c5f35b9c3c7db71ea8d0954365140e96ac085a80b7701a Mar 08 00:42:06 crc kubenswrapper[4762]: W0308 00:42:06.169945 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1906010e_f253_4d33_8e97_96d8860c3ff6.slice/crio-31667ce8e15837369ec7272767c4d9a8fef9526b4a344c2999e09464b7f0388b WatchSource:0}: Error finding container 31667ce8e15837369ec7272767c4d9a8fef9526b4a344c2999e09464b7f0388b: Status 404 returned error can't find the container with id 31667ce8e15837369ec7272767c4d9a8fef9526b4a344c2999e09464b7f0388b Mar 08 00:42:06 crc kubenswrapper[4762]: E0308 00:42:06.170296 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cq94l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9b9ff9f4d-k88bh_openstack-operators(3216ee69-307e-4151-889b-6e71f6e8c47a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 08 00:42:06 crc kubenswrapper[4762]: E0308 00:42:06.171395 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" podUID="3216ee69-307e-4151-889b-6e71f6e8c47a" Mar 08 00:42:06 crc 
kubenswrapper[4762]: E0308 00:42:06.171872 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9855w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-75684d597f-dh78h_openstack-operators(1906010e-f253-4d33-8e97-96d8860c3ff6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 08 00:42:06 crc kubenswrapper[4762]: E0308 00:42:06.173106 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h" podUID="1906010e-f253-4d33-8e97-96d8860c3ff6" Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.291649 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b" event={"ID":"2352d4f2-aadc-4ad7-806e-9324d3be5116","Type":"ContainerStarted","Data":"7edd67594ca585dd4e6d1fffa72807170f0ca994a08bded2fd668913828eb7ac"} Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.298067 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps" event={"ID":"ac0364ec-ad05-431d-b2f4-c92353f15f4c","Type":"ContainerStarted","Data":"48d43839c40bc1f82cd5233993348fc0a60eb16ce631b834dce6f70385153a53"} 
Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.299943 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" event={"ID":"d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3","Type":"ContainerStarted","Data":"8b0f6440c1b8b830aba0d3afa7ed6c73d2967dfbe8c697caee93423f267ed7f4"} Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.301663 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz" event={"ID":"e2cdcc67-fa0d-4f82-9ca7-219626ee5fdd","Type":"ContainerStarted","Data":"5ae5c0723d9e3ae56978e04fc2f26cace25f935eb84b27b148a0bf0eccac9191"} Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.303005 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv" event={"ID":"7f6a4543-a300-4393-93e0-fcfeae3ccd61","Type":"ContainerStarted","Data":"52cc51ba2c8abf4567b4e2b5d8ce4bdfeca8fff787b91c6c29d63cfc0d914749"} Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.305048 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" event={"ID":"6b30a18d-93d3-48de-9b32-7c2326e04220","Type":"ContainerStarted","Data":"b257835005590dfc897df69d5f030bde7af445a98c51419e1cfe6ac5ee17c825"} Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.306761 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h" event={"ID":"1906010e-f253-4d33-8e97-96d8860c3ff6","Type":"ContainerStarted","Data":"31667ce8e15837369ec7272767c4d9a8fef9526b4a344c2999e09464b7f0388b"} Mar 08 00:42:06 crc kubenswrapper[4762]: E0308 00:42:06.308647 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h" podUID="1906010e-f253-4d33-8e97-96d8860c3ff6" Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.309051 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" event={"ID":"3216ee69-307e-4151-889b-6e71f6e8c47a","Type":"ContainerStarted","Data":"ff2d44a0483a43a691c5f35b9c3c7db71ea8d0954365140e96ac085a80b7701a"} Mar 08 00:42:06 crc kubenswrapper[4762]: E0308 00:42:06.310409 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" podUID="3216ee69-307e-4151-889b-6e71f6e8c47a" Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.312368 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2" event={"ID":"5edc85d7-4f23-4c94-a998-17f8402c37d3","Type":"ContainerStarted","Data":"cb75292a039e5d549451a46dea28728c45620397ec6fd60fa7e4e43554d139f7"} Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.322782 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s" event={"ID":"d5f0be01-26e9-4c4e-8122-61659529e505","Type":"ContainerStarted","Data":"d88a42eccac8f0cde8fe4885f9525edc51926698ec19d90838f7bd1df148635a"} Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.337820 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb" 
event={"ID":"625fe5b5-181a-47db-8656-00c8f5fc045f","Type":"ContainerStarted","Data":"dcadfe330584879b4fa9288958db1e0626222034edae39a7c28e38615eb0a99e"} Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.341439 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx" event={"ID":"da66283d-dd88-4e6a-a4ad-496064bc8a78","Type":"ContainerStarted","Data":"a91eb5f1c086f9ba59230d0d5b75d2bd0ab5cf082903b327f9c99a40f355179c"} Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.344200 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n" event={"ID":"60096a41-cef5-4818-a549-96b51b04cd8f","Type":"ContainerStarted","Data":"9ab8ab6ebc71a967f2a400e6d9a47886319fab5db31c55544a86543cf5fbd246"} Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.351320 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" event={"ID":"05d1f89d-b2b2-48ff-8555-e9f68ac3300a","Type":"ContainerStarted","Data":"e7cb2018db82f0e71745718d609c27cac7f66d7846b8e412188790e19f466b74"} Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.352562 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww" event={"ID":"ead6b665-cd0f-475a-a71b-33fd36246484","Type":"ContainerStarted","Data":"7ab1f4de94b47b5fbe73494cce75081eb67d83f53a4956f229f9c7c98a466710"} Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.422480 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2\" (UID: \"8fc55d76-cb72-4ac9-b132-24b997e298a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 
00:42:06 crc kubenswrapper[4762]: E0308 00:42:06.422607 4762 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 08 00:42:06 crc kubenswrapper[4762]: E0308 00:42:06.422672 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert podName:8fc55d76-cb72-4ac9-b132-24b997e298a3 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:08.422654351 +0000 UTC m=+1149.896798695 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" (UID: "8fc55d76-cb72-4ac9-b132-24b997e298a3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.445055 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk"] Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.451291 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5546f"] Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.456857 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92"] Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.462113 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l"] Mar 08 00:42:06 crc kubenswrapper[4762]: W0308 00:42:06.519752 4762 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf82c21a8_e080_4d70_b898_8c15a7b71989.slice/crio-3bedb18d96f939f9636fbf95ba19ca461c7dfe3de7279ef0992625cb1a02b527 WatchSource:0}: Error finding container 3bedb18d96f939f9636fbf95ba19ca461c7dfe3de7279ef0992625cb1a02b527: Status 404 returned error can't find the container with id 3bedb18d96f939f9636fbf95ba19ca461c7dfe3de7279ef0992625cb1a02b527 Mar 08 00:42:06 crc kubenswrapper[4762]: W0308 00:42:06.520422 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc872048a_5196_4f23_97e2_ce9e611c9ea0.slice/crio-f79d480e634e37c55d4a37fd892dbfb9473eacbc2fdadf891e326c3191e604cb WatchSource:0}: Error finding container f79d480e634e37c55d4a37fd892dbfb9473eacbc2fdadf891e326c3191e604cb: Status 404 returned error can't find the container with id f79d480e634e37c55d4a37fd892dbfb9473eacbc2fdadf891e326c3191e604cb Mar 08 00:42:06 crc kubenswrapper[4762]: E0308 00:42:06.621399 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hnjgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-55b5ff4dbb-zfk9l_openstack-operators(1bc55675-0793-4489-b05d-03581df96527): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 08 00:42:06 crc kubenswrapper[4762]: E0308 00:42:06.625341 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" podUID="1bc55675-0793-4489-b05d-03581df96527" Mar 08 00:42:06 crc 
kubenswrapper[4762]: E0308 00:42:06.942747 4762 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 08 00:42:06 crc kubenswrapper[4762]: E0308 00:42:06.943070 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs podName:4d895a55-fc09-4986-ae61-19b0c5425d15 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:08.943041973 +0000 UTC m=+1150.417186317 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs") pod "openstack-operator-controller-manager-7585f757fc-xgd5r" (UID: "4d895a55-fc09-4986-ae61-19b0c5425d15") : secret "webhook-server-cert" not found Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.943714 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:06 crc kubenswrapper[4762]: I0308 00:42:06.943872 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:06 crc kubenswrapper[4762]: E0308 00:42:06.944098 4762 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 08 00:42:06 crc kubenswrapper[4762]: E0308 00:42:06.944197 4762 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs podName:4d895a55-fc09-4986-ae61-19b0c5425d15 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:08.944174202 +0000 UTC m=+1150.418318546 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs") pod "openstack-operator-controller-manager-7585f757fc-xgd5r" (UID: "4d895a55-fc09-4986-ae61-19b0c5425d15") : secret "metrics-server-cert" not found Mar 08 00:42:07 crc kubenswrapper[4762]: I0308 00:42:07.291433 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a16712a-b605-4c8b-9382-8c630322ca2c" path="/var/lib/kubelet/pods/0a16712a-b605-4c8b-9382-8c630322ca2c/volumes" Mar 08 00:42:07 crc kubenswrapper[4762]: I0308 00:42:07.364090 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" event={"ID":"f82c21a8-e080-4d70-b898-8c15a7b71989","Type":"ContainerStarted","Data":"3bedb18d96f939f9636fbf95ba19ca461c7dfe3de7279ef0992625cb1a02b527"} Mar 08 00:42:07 crc kubenswrapper[4762]: I0308 00:42:07.365643 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5546f" event={"ID":"c872048a-5196-4f23-97e2-ce9e611c9ea0","Type":"ContainerStarted","Data":"f79d480e634e37c55d4a37fd892dbfb9473eacbc2fdadf891e326c3191e604cb"} Mar 08 00:42:07 crc kubenswrapper[4762]: I0308 00:42:07.367313 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" event={"ID":"1bc55675-0793-4489-b05d-03581df96527","Type":"ContainerStarted","Data":"e3ac345ad5859732d7b6eaa656e08ab22c95410b690438b1899ff95dd294328c"} Mar 08 00:42:07 crc kubenswrapper[4762]: E0308 00:42:07.378746 4762 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" podUID="1bc55675-0793-4489-b05d-03581df96527" Mar 08 00:42:07 crc kubenswrapper[4762]: I0308 00:42:07.379813 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" event={"ID":"8e8be3de-e055-441d-bfff-7b966b35dc15","Type":"ContainerStarted","Data":"386bdde9dd122533acf871e91c01794012acad93b8c7ff8e40da327077ecf600"} Mar 08 00:42:07 crc kubenswrapper[4762]: E0308 00:42:07.381148 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h" podUID="1906010e-f253-4d33-8e97-96d8860c3ff6" Mar 08 00:42:07 crc kubenswrapper[4762]: E0308 00:42:07.390428 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" podUID="3216ee69-307e-4151-889b-6e71f6e8c47a" Mar 08 00:42:07 crc kubenswrapper[4762]: I0308 00:42:07.961699 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-dtdxk\" (UID: \"bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b\") " 
pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" Mar 08 00:42:07 crc kubenswrapper[4762]: E0308 00:42:07.961986 4762 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 08 00:42:07 crc kubenswrapper[4762]: E0308 00:42:07.962052 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert podName:bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b nodeName:}" failed. No retries permitted until 2026-03-08 00:42:11.962037369 +0000 UTC m=+1153.436181713 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert") pod "infra-operator-controller-manager-f7fcc58b9-dtdxk" (UID: "bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b") : secret "infra-operator-webhook-server-cert" not found Mar 08 00:42:08 crc kubenswrapper[4762]: E0308 00:42:08.405745 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" podUID="1bc55675-0793-4489-b05d-03581df96527" Mar 08 00:42:08 crc kubenswrapper[4762]: I0308 00:42:08.469281 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2\" (UID: \"8fc55d76-cb72-4ac9-b132-24b997e298a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 00:42:08 crc kubenswrapper[4762]: E0308 00:42:08.469488 4762 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 08 00:42:08 crc kubenswrapper[4762]: E0308 00:42:08.469545 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert podName:8fc55d76-cb72-4ac9-b132-24b997e298a3 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:12.469525827 +0000 UTC m=+1153.943670171 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" (UID: "8fc55d76-cb72-4ac9-b132-24b997e298a3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 08 00:42:08 crc kubenswrapper[4762]: I0308 00:42:08.981020 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:08 crc kubenswrapper[4762]: I0308 00:42:08.981077 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:08 crc kubenswrapper[4762]: E0308 00:42:08.981228 4762 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 08 00:42:08 crc kubenswrapper[4762]: E0308 00:42:08.981264 4762 secret.go:188] Couldn't get secret 
openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 08 00:42:08 crc kubenswrapper[4762]: E0308 00:42:08.981317 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs podName:4d895a55-fc09-4986-ae61-19b0c5425d15 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:12.981298402 +0000 UTC m=+1154.455442746 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs") pod "openstack-operator-controller-manager-7585f757fc-xgd5r" (UID: "4d895a55-fc09-4986-ae61-19b0c5425d15") : secret "webhook-server-cert" not found Mar 08 00:42:08 crc kubenswrapper[4762]: E0308 00:42:08.981339 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs podName:4d895a55-fc09-4986-ae61-19b0c5425d15 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:12.981331683 +0000 UTC m=+1154.455476027 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs") pod "openstack-operator-controller-manager-7585f757fc-xgd5r" (UID: "4d895a55-fc09-4986-ae61-19b0c5425d15") : secret "metrics-server-cert" not found Mar 08 00:42:12 crc kubenswrapper[4762]: I0308 00:42:12.046845 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-dtdxk\" (UID: \"bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" Mar 08 00:42:12 crc kubenswrapper[4762]: E0308 00:42:12.047095 4762 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 08 00:42:12 crc kubenswrapper[4762]: E0308 00:42:12.047525 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert podName:bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b nodeName:}" failed. No retries permitted until 2026-03-08 00:42:20.047502824 +0000 UTC m=+1161.521647168 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert") pod "infra-operator-controller-manager-f7fcc58b9-dtdxk" (UID: "bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b") : secret "infra-operator-webhook-server-cert" not found Mar 08 00:42:12 crc kubenswrapper[4762]: I0308 00:42:12.554536 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2\" (UID: \"8fc55d76-cb72-4ac9-b132-24b997e298a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 00:42:12 crc kubenswrapper[4762]: E0308 00:42:12.554778 4762 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 08 00:42:12 crc kubenswrapper[4762]: E0308 00:42:12.554877 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert podName:8fc55d76-cb72-4ac9-b132-24b997e298a3 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:20.554850978 +0000 UTC m=+1162.028995332 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" (UID: "8fc55d76-cb72-4ac9-b132-24b997e298a3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 08 00:42:12 crc kubenswrapper[4762]: I0308 00:42:12.851663 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 00:42:12 crc kubenswrapper[4762]: I0308 00:42:12.851735 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:42:12 crc kubenswrapper[4762]: I0308 00:42:12.851820 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:42:12 crc kubenswrapper[4762]: I0308 00:42:12.852603 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fc00848745303e5c66afef8ceef215b964b6d630a4ebb3163157afdcd2292c30"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 00:42:12 crc kubenswrapper[4762]: I0308 00:42:12.852695 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" 
containerID="cri-o://fc00848745303e5c66afef8ceef215b964b6d630a4ebb3163157afdcd2292c30" gracePeriod=600 Mar 08 00:42:13 crc kubenswrapper[4762]: I0308 00:42:13.062367 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:13 crc kubenswrapper[4762]: I0308 00:42:13.062416 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:13 crc kubenswrapper[4762]: E0308 00:42:13.062584 4762 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 08 00:42:13 crc kubenswrapper[4762]: E0308 00:42:13.062629 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs podName:4d895a55-fc09-4986-ae61-19b0c5425d15 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:21.062615473 +0000 UTC m=+1162.536759817 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs") pod "openstack-operator-controller-manager-7585f757fc-xgd5r" (UID: "4d895a55-fc09-4986-ae61-19b0c5425d15") : secret "metrics-server-cert" not found Mar 08 00:42:13 crc kubenswrapper[4762]: E0308 00:42:13.062973 4762 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 08 00:42:13 crc kubenswrapper[4762]: E0308 00:42:13.063003 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs podName:4d895a55-fc09-4986-ae61-19b0c5425d15 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:21.062993442 +0000 UTC m=+1162.537137786 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs") pod "openstack-operator-controller-manager-7585f757fc-xgd5r" (UID: "4d895a55-fc09-4986-ae61-19b0c5425d15") : secret "webhook-server-cert" not found Mar 08 00:42:13 crc kubenswrapper[4762]: I0308 00:42:13.444811 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="fc00848745303e5c66afef8ceef215b964b6d630a4ebb3163157afdcd2292c30" exitCode=0 Mar 08 00:42:13 crc kubenswrapper[4762]: I0308 00:42:13.444920 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"fc00848745303e5c66afef8ceef215b964b6d630a4ebb3163157afdcd2292c30"} Mar 08 00:42:13 crc kubenswrapper[4762]: I0308 00:42:13.445454 4762 scope.go:117] "RemoveContainer" containerID="c2606c2d00d50bbf62802680db1373962883eda1c9950ebf12d9d6c0b5953df4" Mar 08 00:42:20 crc kubenswrapper[4762]: I0308 00:42:20.091559 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-dtdxk\" (UID: \"bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" Mar 08 00:42:20 crc kubenswrapper[4762]: E0308 00:42:20.091883 4762 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 08 00:42:20 crc kubenswrapper[4762]: E0308 00:42:20.092064 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert podName:bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b nodeName:}" failed. No retries permitted until 2026-03-08 00:42:36.09202295 +0000 UTC m=+1177.566167324 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert") pod "infra-operator-controller-manager-f7fcc58b9-dtdxk" (UID: "bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b") : secret "infra-operator-webhook-server-cert" not found Mar 08 00:42:20 crc kubenswrapper[4762]: E0308 00:42:20.221544 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97" Mar 08 00:42:20 crc kubenswrapper[4762]: E0308 00:42:20.221789 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tzmrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-5r7nk_openstack-operators(8e8be3de-e055-441d-bfff-7b966b35dc15): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 08 00:42:20 crc kubenswrapper[4762]: E0308 00:42:20.222974 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" podUID="8e8be3de-e055-441d-bfff-7b966b35dc15" Mar 08 00:42:20 crc kubenswrapper[4762]: E0308 00:42:20.520243 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" podUID="8e8be3de-e055-441d-bfff-7b966b35dc15" Mar 08 00:42:20 crc kubenswrapper[4762]: E0308 00:42:20.606608 4762 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 08 00:42:20 crc kubenswrapper[4762]: E0308 00:42:20.607023 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert podName:8fc55d76-cb72-4ac9-b132-24b997e298a3 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:36.607007256 +0000 UTC m=+1178.081151660 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" (UID: "8fc55d76-cb72-4ac9-b132-24b997e298a3") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Mar 08 00:42:20 crc kubenswrapper[4762]: I0308 00:42:20.606473 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2\" (UID: \"8fc55d76-cb72-4ac9-b132-24b997e298a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2"
Mar 08 00:42:20 crc kubenswrapper[4762]: E0308 00:42:20.770145 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e"
Mar 08 00:42:20 crc kubenswrapper[4762]: E0308 00:42:20.770288 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vkzjh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-648564c9fc-ptbxt_openstack-operators(d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 08 00:42:20 crc kubenswrapper[4762]: E0308 00:42:20.771500 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" podUID="d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3"
Mar 08 00:42:21 crc kubenswrapper[4762]: I0308 00:42:21.114980 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r"
Mar 08 00:42:21 crc kubenswrapper[4762]: I0308 00:42:21.115034 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r"
Mar 08 00:42:21 crc kubenswrapper[4762]: E0308 00:42:21.115273 4762 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Mar 08 00:42:21 crc kubenswrapper[4762]: E0308 00:42:21.115330 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs podName:4d895a55-fc09-4986-ae61-19b0c5425d15 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:37.115315094 +0000 UTC m=+1178.589459428 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs") pod "openstack-operator-controller-manager-7585f757fc-xgd5r" (UID: "4d895a55-fc09-4986-ae61-19b0c5425d15") : secret "metrics-server-cert" not found
Mar 08 00:42:21 crc kubenswrapper[4762]: E0308 00:42:21.115356 4762 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Mar 08 00:42:21 crc kubenswrapper[4762]: E0308 00:42:21.115456 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs podName:4d895a55-fc09-4986-ae61-19b0c5425d15 nodeName:}" failed. No retries permitted until 2026-03-08 00:42:37.115416326 +0000 UTC m=+1178.589560700 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs") pod "openstack-operator-controller-manager-7585f757fc-xgd5r" (UID: "4d895a55-fc09-4986-ae61-19b0c5425d15") : secret "webhook-server-cert" not found
Mar 08 00:42:21 crc kubenswrapper[4762]: E0308 00:42:21.317643 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c"
Mar 08 00:42:21 crc kubenswrapper[4762]: E0308 00:42:21.317931 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bmjjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7c789f89c6-hwdww_openstack-operators(ead6b665-cd0f-475a-a71b-33fd36246484): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 08 00:42:21 crc kubenswrapper[4762]: E0308 00:42:21.319827 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww" podUID="ead6b665-cd0f-475a-a71b-33fd36246484"
Mar 08 00:42:21 crc kubenswrapper[4762]: E0308 00:42:21.540771 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e\\\"\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" podUID="d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3"
Mar 08 00:42:21 crc kubenswrapper[4762]: E0308 00:42:21.540792 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:9d723ab33964ee44704eed3223b64e828349d45dee04695434a6fcf4b6807d4c\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww" podUID="ead6b665-cd0f-475a-a71b-33fd36246484"
Mar 08 00:42:21 crc kubenswrapper[4762]: E0308 00:42:21.768037 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
Mar 08 00:42:21 crc kubenswrapper[4762]: E0308 00:42:21.768223 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lvwjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-5546f_openstack-operators(c872048a-5196-4f23-97e2-ce9e611c9ea0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 08 00:42:21 crc kubenswrapper[4762]: E0308 00:42:21.770539 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5546f" podUID="c872048a-5196-4f23-97e2-ce9e611c9ea0"
Mar 08 00:42:22 crc kubenswrapper[4762]: E0308 00:42:22.255251 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84"
Mar 08 00:42:22 crc kubenswrapper[4762]: E0308 00:42:22.255427 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xfpfl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-74b6b5dc96-xmkb6_openstack-operators(05d1f89d-b2b2-48ff-8555-e9f68ac3300a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 08 00:42:22 crc kubenswrapper[4762]: E0308 00:42:22.256925 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" podUID="05d1f89d-b2b2-48ff-8555-e9f68ac3300a"
Mar 08 00:42:22 crc kubenswrapper[4762]: E0308 00:42:22.535908 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5546f" podUID="c872048a-5196-4f23-97e2-ce9e611c9ea0"
Mar 08 00:42:22 crc kubenswrapper[4762]: E0308 00:42:22.536185 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84\\\"\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" podUID="05d1f89d-b2b2-48ff-8555-e9f68ac3300a"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.589017 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n" event={"ID":"60096a41-cef5-4818-a549-96b51b04cd8f","Type":"ContainerStarted","Data":"85b6d5ce87ed26e4662119f510bc2f85da9c514783596a3ad74ec20e820eae25"}
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.591491 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.611026 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb" event={"ID":"625fe5b5-181a-47db-8656-00c8f5fc045f","Type":"ContainerStarted","Data":"7f3dc2c2755d85caffbc81f75f546e4ce45e0ade66278460b8997de77a5548a1"}
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.612020 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.626004 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" event={"ID":"6b30a18d-93d3-48de-9b32-7c2326e04220","Type":"ContainerStarted","Data":"f858113b55891e2c17afbc2db41c811701ed452d2b386dfb2a62beb9b43a4dff"}
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.626968 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.658010 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b" event={"ID":"2352d4f2-aadc-4ad7-806e-9324d3be5116","Type":"ContainerStarted","Data":"54e87d48443a9e2100f7b0208291f5744ffbafd738aa53f7acdb15b2492d5c52"}
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.658055 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.659828 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" event={"ID":"3216ee69-307e-4151-889b-6e71f6e8c47a","Type":"ContainerStarted","Data":"70737e4f4959882428b8977870051d3f4295fdc25ada91a048c3f1cfa5abcd62"}
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.660063 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.670851 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" event={"ID":"f82c21a8-e080-4d70-b898-8c15a7b71989","Type":"ContainerStarted","Data":"4f07ee93764f10346ebc352361b1031d344fce8c025603d87461cc6fbe8864b9"}
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.671199 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.690474 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx" event={"ID":"da66283d-dd88-4e6a-a4ad-496064bc8a78","Type":"ContainerStarted","Data":"7bf4c576eaffa348202b63f4de18182a837af22202f5d08a94228a7e67284364"}
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.690891 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.693741 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n" podStartSLOduration=5.269679366 podStartE2EDuration="21.693722371s" podCreationTimestamp="2026-03-08 00:42:03 +0000 UTC" firstStartedPulling="2026-03-08 00:42:05.320423579 +0000 UTC m=+1146.794567923" lastFinishedPulling="2026-03-08 00:42:21.744466574 +0000 UTC m=+1163.218610928" observedRunningTime="2026-03-08 00:42:24.627243118 +0000 UTC m=+1166.101387452" watchObservedRunningTime="2026-03-08 00:42:24.693722371 +0000 UTC m=+1166.167866715"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.694431 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s" event={"ID":"d5f0be01-26e9-4c4e-8122-61659529e505","Type":"ContainerStarted","Data":"cf2703ca9cd2647940ecb23f105cf657c29320aac4e9a687d0d3ca55302a9f0b"}
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.695215 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.707097 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv" event={"ID":"7f6a4543-a300-4393-93e0-fcfeae3ccd61","Type":"ContainerStarted","Data":"0246362cc44d56f81837ceb039eeba3f9f1795ca573f22f6d5769c4cdec60857"}
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.708066 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.709821 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz" event={"ID":"e2cdcc67-fa0d-4f82-9ca7-219626ee5fdd","Type":"ContainerStarted","Data":"73d86d48f527f7e25b2705cc89b2f441832d61d1a812832b1d48cc7404030c66"}
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.710536 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.719248 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb" podStartSLOduration=4.093800887 podStartE2EDuration="20.719224663s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:05.609530498 +0000 UTC m=+1147.083674842" lastFinishedPulling="2026-03-08 00:42:22.234954264 +0000 UTC m=+1163.709098618" observedRunningTime="2026-03-08 00:42:24.703832155 +0000 UTC m=+1166.177976489" watchObservedRunningTime="2026-03-08 00:42:24.719224663 +0000 UTC m=+1166.193369007"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.720712 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" podStartSLOduration=4.583823595 podStartE2EDuration="20.72070529s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:06.097607707 +0000 UTC m=+1147.571752051" lastFinishedPulling="2026-03-08 00:42:22.234489402 +0000 UTC m=+1163.708633746" observedRunningTime="2026-03-08 00:42:24.690048648 +0000 UTC m=+1166.164193012" watchObservedRunningTime="2026-03-08 00:42:24.72070529 +0000 UTC m=+1166.194849634"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.736583 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b" podStartSLOduration=2.9054633770000002 podStartE2EDuration="20.73656469s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:05.63105022 +0000 UTC m=+1147.105194564" lastFinishedPulling="2026-03-08 00:42:23.462151533 +0000 UTC m=+1164.936295877" observedRunningTime="2026-03-08 00:42:24.729653226 +0000 UTC m=+1166.203797570" watchObservedRunningTime="2026-03-08 00:42:24.73656469 +0000 UTC m=+1166.210709034"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.738708 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"94fdabdefc94b9566cc477b9dd53129703a711b224f05ccb5a6e2de2ee8a0c6d"}
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.755537 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h" event={"ID":"1906010e-f253-4d33-8e97-96d8860c3ff6","Type":"ContainerStarted","Data":"af8c696f862ade0f147156b07466d75efdb92dcb7add1858f5a4619f9727121f"}
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.756480 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.767773 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx" podStartSLOduration=4.598528065 podStartE2EDuration="20.767737334s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:06.065920219 +0000 UTC m=+1147.540064563" lastFinishedPulling="2026-03-08 00:42:22.235129468 +0000 UTC m=+1163.709273832" observedRunningTime="2026-03-08 00:42:24.763039886 +0000 UTC m=+1166.237184230" watchObservedRunningTime="2026-03-08 00:42:24.767737334 +0000 UTC m=+1166.241881688"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.770492 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps" event={"ID":"ac0364ec-ad05-431d-b2f4-c92353f15f4c","Type":"ContainerStarted","Data":"6acf7ee501e8a835f270ad8043e28529e07afd4172cc2e256467719dec33ff78"}
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.770967 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.787010 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2" event={"ID":"5edc85d7-4f23-4c94-a998-17f8402c37d3","Type":"ContainerStarted","Data":"216408102c68c6490932598429145a0b9d34a768c4dcb4daf818646e77686486"}
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.787745 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.812664 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz" event={"ID":"20b130fa-d7f7-441a-bd96-0d5858f1ece1","Type":"ContainerStarted","Data":"fd1c28de8738fca4a59a122e825168621ce76c992957cc70cf9ecd2772eadcf1"}
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.813407 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.818458 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" podStartSLOduration=3.366751041 podStartE2EDuration="20.818447591s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:06.170164124 +0000 UTC m=+1147.644308468" lastFinishedPulling="2026-03-08 00:42:23.621860674 +0000 UTC m=+1165.096005018" observedRunningTime="2026-03-08 00:42:24.787984334 +0000 UTC m=+1166.262128668" watchObservedRunningTime="2026-03-08 00:42:24.818447591 +0000 UTC m=+1166.292591935"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.821039 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" podStartSLOduration=5.133010312 podStartE2EDuration="20.821033196s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:06.547121515 +0000 UTC m=+1148.021265849" lastFinishedPulling="2026-03-08 00:42:22.235144369 +0000 UTC m=+1163.709288733" observedRunningTime="2026-03-08 00:42:24.81797009 +0000 UTC m=+1166.292114434" watchObservedRunningTime="2026-03-08 00:42:24.821033196 +0000 UTC m=+1166.295177540"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.869106 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s" podStartSLOduration=4.794315765 podStartE2EDuration="20.869084486s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:05.668909844 +0000 UTC m=+1147.143054188" lastFinishedPulling="2026-03-08 00:42:21.743678565 +0000 UTC m=+1163.217822909" observedRunningTime="2026-03-08 00:42:24.840945818 +0000 UTC m=+1166.315090162" watchObservedRunningTime="2026-03-08 00:42:24.869084486 +0000 UTC m=+1166.343228830"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.925462 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv" podStartSLOduration=5.2922445719999995 podStartE2EDuration="20.925445535s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:06.111264231 +0000 UTC m=+1147.585408575" lastFinishedPulling="2026-03-08 00:42:21.744465194 +0000 UTC m=+1163.218609538" observedRunningTime="2026-03-08 00:42:24.919189227 +0000 UTC m=+1166.393333571" watchObservedRunningTime="2026-03-08 00:42:24.925445535 +0000 UTC m=+1166.399589879"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.956822 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz" podStartSLOduration=5.415740963 podStartE2EDuration="21.956803845s" podCreationTimestamp="2026-03-08 00:42:03 +0000 UTC" firstStartedPulling="2026-03-08 00:42:05.201909665 +0000 UTC m=+1146.676054009" lastFinishedPulling="2026-03-08 00:42:21.742972547 +0000 UTC m=+1163.217116891" observedRunningTime="2026-03-08 00:42:24.936618427 +0000 UTC m=+1166.410762771" watchObservedRunningTime="2026-03-08 00:42:24.956803845 +0000 UTC m=+1166.430948179"
Mar 08 00:42:24 crc kubenswrapper[4762]: I0308 00:42:24.980344 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2" podStartSLOduration=4.839680149 podStartE2EDuration="20.980328318s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:06.094204942 +0000 UTC m=+1147.568349286" lastFinishedPulling="2026-03-08 00:42:22.234853111 +0000 UTC m=+1163.708997455" observedRunningTime="2026-03-08 00:42:24.975081075 +0000 UTC m=+1166.449225409" watchObservedRunningTime="2026-03-08 00:42:24.980328318 +0000 UTC m=+1166.454472662"
Mar 08 00:42:25 crc kubenswrapper[4762]: I0308 00:42:25.005623 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h" podStartSLOduration=3.567983139 podStartE2EDuration="21.005607284s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:06.171799735 +0000 UTC m=+1147.645944079" lastFinishedPulling="2026-03-08 00:42:23.60942387 +0000 UTC m=+1165.083568224" observedRunningTime="2026-03-08 00:42:25.00310562 +0000 UTC m=+1166.477249974" watchObservedRunningTime="2026-03-08 00:42:25.005607284 +0000 UTC m=+1166.479751628"
Mar 08 00:42:25 crc kubenswrapper[4762]: I0308 00:42:25.024286 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz" podStartSLOduration=4.404147451 podStartE2EDuration="21.024267314s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:05.616632667 +0000 UTC m=+1147.090777011" lastFinishedPulling="2026-03-08 00:42:22.23675253 +0000 UTC m=+1163.710896874" observedRunningTime="2026-03-08 00:42:25.024244763 +0000 UTC m=+1166.498389107" watchObservedRunningTime="2026-03-08 00:42:25.024267314 +0000 UTC m=+1166.498411658"
Mar 08 00:42:25 crc kubenswrapper[4762]: I0308 00:42:25.056793 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps" podStartSLOduration=5.406784156 podStartE2EDuration="21.056776212s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:06.094492969 +0000 UTC m=+1147.568637313" lastFinishedPulling="2026-03-08 00:42:21.744485025 +0000 UTC m=+1163.218629369" observedRunningTime="2026-03-08 00:42:25.043806826 +0000 UTC m=+1166.517951170" watchObservedRunningTime="2026-03-08 00:42:25.056776212 +0000 UTC m=+1166.530920556"
Mar 08 00:42:32 crc kubenswrapper[4762]: I0308 00:42:32.888314 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" event={"ID":"1bc55675-0793-4489-b05d-03581df96527","Type":"ContainerStarted","Data":"11cf3d19101d96f9c6a847a44bc8ce0b617c8b5c4e9323fbac2d440cd7e73406"}
Mar 08 00:42:32 crc kubenswrapper[4762]: I0308 00:42:32.889094 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l"
Mar 08 00:42:32 crc kubenswrapper[4762]: I0308 00:42:32.911747 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" podStartSLOduration=3.300855722 podStartE2EDuration="28.911721044s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:06.621251401 +0000 UTC m=+1148.095395745" lastFinishedPulling="2026-03-08 00:42:32.232116683 +0000 UTC m=+1173.706261067" observedRunningTime="2026-03-08 00:42:32.909551509 +0000 UTC m=+1174.383695853" watchObservedRunningTime="2026-03-08 00:42:32.911721044 +0000 UTC m=+1174.385865418"
Mar 08 00:42:34 crc kubenswrapper[4762]: I0308 00:42:34.318110 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz"
Mar 08 00:42:34 crc kubenswrapper[4762]: I0308 00:42:34.373520 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n"
Mar 08 00:42:34 crc kubenswrapper[4762]: I0308 00:42:34.396476 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb"
Mar 08 00:42:34 crc kubenswrapper[4762]: I0308 00:42:34.397618 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz"
Mar 08 00:42:34 crc kubenswrapper[4762]: I0308 00:42:34.438174 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b"
Mar 08 00:42:34 crc kubenswrapper[4762]: I0308 00:42:34.526195 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s"
Mar 08 00:42:34 crc kubenswrapper[4762]: I0308 00:42:34.699152 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2"
Mar 08 00:42:34 crc kubenswrapper[4762]: I0308 00:42:34.811729 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv"
Mar 08 00:42:34 crc kubenswrapper[4762]: I0308 00:42:34.856002 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps"
Mar 08 00:42:34 crc kubenswrapper[4762]: I0308 00:42:34.856320 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx"
Mar 08 00:42:34 crc kubenswrapper[4762]: I0308 00:42:34.893413 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8"
Mar 08 00:42:34 crc kubenswrapper[4762]: I0308 00:42:34.947916 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" event={"ID":"8e8be3de-e055-441d-bfff-7b966b35dc15","Type":"ContainerStarted","Data":"28860753954b934f1d2cab67faa8722bec91c8b32b2b3718e4fd528a1f71283a"}
Mar 08 00:42:34 crc kubenswrapper[4762]: I0308 00:42:34.948621 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk"
Mar 08 00:42:35 crc kubenswrapper[4762]: I0308 00:42:35.006478 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" podStartSLOduration=2.936305044 podStartE2EDuration="31.006460317s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:06.549209007 +0000 UTC m=+1148.023353351" lastFinishedPulling="2026-03-08 00:42:34.61936428 +0000 UTC m=+1176.093508624" observedRunningTime="2026-03-08 00:42:35.004644682 +0000 UTC m=+1176.478789046" watchObservedRunningTime="2026-03-08 00:42:35.006460317 +0000 UTC m=+1176.480604651"
Mar 08 00:42:35 crc kubenswrapper[4762]: I0308 00:42:35.014430 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h"
Mar 08 00:42:35 crc kubenswrapper[4762]: I0308
00:42:35.112450 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" Mar 08 00:42:35 crc kubenswrapper[4762]: I0308 00:42:35.214661 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" Mar 08 00:42:35 crc kubenswrapper[4762]: I0308 00:42:35.960003 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5546f" event={"ID":"c872048a-5196-4f23-97e2-ce9e611c9ea0","Type":"ContainerStarted","Data":"e53d653fcf859e7c96d8424c7384d641efd4a7905317f7d8c03a2b89c95af947"} Mar 08 00:42:35 crc kubenswrapper[4762]: I0308 00:42:35.981554 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5546f" podStartSLOduration=3.878710072 podStartE2EDuration="31.981538918s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:06.620908323 +0000 UTC m=+1148.095052667" lastFinishedPulling="2026-03-08 00:42:34.723737179 +0000 UTC m=+1176.197881513" observedRunningTime="2026-03-08 00:42:35.97804875 +0000 UTC m=+1177.452193104" watchObservedRunningTime="2026-03-08 00:42:35.981538918 +0000 UTC m=+1177.455683262" Mar 08 00:42:36 crc kubenswrapper[4762]: I0308 00:42:36.097360 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-dtdxk\" (UID: \"bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" Mar 08 00:42:36 crc kubenswrapper[4762]: I0308 00:42:36.107565 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-dtdxk\" (UID: \"bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" Mar 08 00:42:36 crc kubenswrapper[4762]: I0308 00:42:36.315389 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" Mar 08 00:42:36 crc kubenswrapper[4762]: I0308 00:42:36.597798 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk"] Mar 08 00:42:36 crc kubenswrapper[4762]: I0308 00:42:36.610348 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2\" (UID: \"8fc55d76-cb72-4ac9-b132-24b997e298a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 00:42:36 crc kubenswrapper[4762]: I0308 00:42:36.623596 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fc55d76-cb72-4ac9-b132-24b997e298a3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2\" (UID: \"8fc55d76-cb72-4ac9-b132-24b997e298a3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 00:42:36 crc kubenswrapper[4762]: I0308 00:42:36.743319 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 00:42:36 crc kubenswrapper[4762]: I0308 00:42:36.972295 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" event={"ID":"05d1f89d-b2b2-48ff-8555-e9f68ac3300a","Type":"ContainerStarted","Data":"33f9e099d59c214fcb4f947449b80b4dd98513591a838e3ba41861a2dc9dbf94"} Mar 08 00:42:36 crc kubenswrapper[4762]: I0308 00:42:36.972804 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" Mar 08 00:42:36 crc kubenswrapper[4762]: I0308 00:42:36.974234 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" event={"ID":"bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b","Type":"ContainerStarted","Data":"57769a1fef56842d925d4ee0d8291c8211435574ca9e25a375d5a1b9b1b4b889"} Mar 08 00:42:36 crc kubenswrapper[4762]: I0308 00:42:36.992300 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" podStartSLOduration=3.332986421 podStartE2EDuration="32.992270206s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:06.093440372 +0000 UTC m=+1147.567584716" lastFinishedPulling="2026-03-08 00:42:35.752724157 +0000 UTC m=+1177.226868501" observedRunningTime="2026-03-08 00:42:36.990808099 +0000 UTC m=+1178.464952483" watchObservedRunningTime="2026-03-08 00:42:36.992270206 +0000 UTC m=+1178.466414590" Mar 08 00:42:37 crc kubenswrapper[4762]: I0308 00:42:37.121324 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: 
\"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:37 crc kubenswrapper[4762]: I0308 00:42:37.121423 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:37 crc kubenswrapper[4762]: I0308 00:42:37.127122 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-metrics-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:37 crc kubenswrapper[4762]: I0308 00:42:37.128581 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4d895a55-fc09-4986-ae61-19b0c5425d15-webhook-certs\") pod \"openstack-operator-controller-manager-7585f757fc-xgd5r\" (UID: \"4d895a55-fc09-4986-ae61-19b0c5425d15\") " pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:37 crc kubenswrapper[4762]: I0308 00:42:37.184440 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:38 crc kubenswrapper[4762]: I0308 00:42:38.455486 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r"] Mar 08 00:42:38 crc kubenswrapper[4762]: I0308 00:42:38.611911 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2"] Mar 08 00:42:38 crc kubenswrapper[4762]: W0308 00:42:38.616593 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fc55d76_cb72_4ac9_b132_24b997e298a3.slice/crio-608822e9fd1286991e03faebce8d3a055eaf50c3d757747cc3c3906d4dbce2ed WatchSource:0}: Error finding container 608822e9fd1286991e03faebce8d3a055eaf50c3d757747cc3c3906d4dbce2ed: Status 404 returned error can't find the container with id 608822e9fd1286991e03faebce8d3a055eaf50c3d757747cc3c3906d4dbce2ed Mar 08 00:42:39 crc kubenswrapper[4762]: I0308 00:42:39.001085 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" event={"ID":"8fc55d76-cb72-4ac9-b132-24b997e298a3","Type":"ContainerStarted","Data":"608822e9fd1286991e03faebce8d3a055eaf50c3d757747cc3c3906d4dbce2ed"} Mar 08 00:42:39 crc kubenswrapper[4762]: I0308 00:42:39.004389 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" event={"ID":"4d895a55-fc09-4986-ae61-19b0c5425d15","Type":"ContainerStarted","Data":"dddc69c710d1432a4fddbcc50301c9d3faf05ece42f6fef1d22041d900146e84"} Mar 08 00:42:40 crc kubenswrapper[4762]: I0308 00:42:40.037518 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" 
event={"ID":"4d895a55-fc09-4986-ae61-19b0c5425d15","Type":"ContainerStarted","Data":"e2bc83864bccd0148d70ed4df7834154699d465638d62e1abf47951b4b886ccb"} Mar 08 00:42:40 crc kubenswrapper[4762]: I0308 00:42:40.037632 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:40 crc kubenswrapper[4762]: I0308 00:42:40.039788 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" event={"ID":"d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3","Type":"ContainerStarted","Data":"a5bfcd2e020af424f26fdb487fb70a6a9480ba03790fbb4d2900fc010885abe5"} Mar 08 00:42:40 crc kubenswrapper[4762]: I0308 00:42:40.040007 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" Mar 08 00:42:40 crc kubenswrapper[4762]: I0308 00:42:40.041696 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww" event={"ID":"ead6b665-cd0f-475a-a71b-33fd36246484","Type":"ContainerStarted","Data":"c73850d91bf86553638515fb0988a586ddf0b36c7c32af47b42d48a3052a29fa"} Mar 08 00:42:40 crc kubenswrapper[4762]: I0308 00:42:40.041923 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww" Mar 08 00:42:40 crc kubenswrapper[4762]: I0308 00:42:40.082046 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" podStartSLOduration=36.082029021 podStartE2EDuration="36.082029021s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:42:40.062159317 +0000 UTC 
m=+1181.536303681" watchObservedRunningTime="2026-03-08 00:42:40.082029021 +0000 UTC m=+1181.556173365" Mar 08 00:42:40 crc kubenswrapper[4762]: I0308 00:42:40.082596 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" podStartSLOduration=2.993991044 podStartE2EDuration="36.082592178s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:06.078246649 +0000 UTC m=+1147.552390993" lastFinishedPulling="2026-03-08 00:42:39.166847783 +0000 UTC m=+1180.640992127" observedRunningTime="2026-03-08 00:42:40.078699679 +0000 UTC m=+1181.552844033" watchObservedRunningTime="2026-03-08 00:42:40.082592178 +0000 UTC m=+1181.556736522" Mar 08 00:42:40 crc kubenswrapper[4762]: I0308 00:42:40.096640 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww" podStartSLOduration=3.031943044 podStartE2EDuration="36.096619705s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:06.104192283 +0000 UTC m=+1147.578336627" lastFinishedPulling="2026-03-08 00:42:39.168868944 +0000 UTC m=+1180.643013288" observedRunningTime="2026-03-08 00:42:40.091913962 +0000 UTC m=+1181.566058306" watchObservedRunningTime="2026-03-08 00:42:40.096619705 +0000 UTC m=+1181.570764059" Mar 08 00:42:43 crc kubenswrapper[4762]: I0308 00:42:43.077198 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" event={"ID":"8fc55d76-cb72-4ac9-b132-24b997e298a3","Type":"ContainerStarted","Data":"1d211df9928790d0448bc1bbf2b5df004becd2a552faa846eb44cec7a37ebd49"} Mar 08 00:42:43 crc kubenswrapper[4762]: I0308 00:42:43.077555 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 00:42:43 crc kubenswrapper[4762]: I0308 00:42:43.080376 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" event={"ID":"bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b","Type":"ContainerStarted","Data":"77aaea64df4ef233c39cf35b6026b3d18772f992eff93d7486c34cc57c3d8ee9"} Mar 08 00:42:43 crc kubenswrapper[4762]: I0308 00:42:43.080528 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" Mar 08 00:42:43 crc kubenswrapper[4762]: I0308 00:42:43.110401 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" podStartSLOduration=35.860535876 podStartE2EDuration="39.110376453s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:38.619134269 +0000 UTC m=+1180.093278613" lastFinishedPulling="2026-03-08 00:42:41.868974836 +0000 UTC m=+1183.343119190" observedRunningTime="2026-03-08 00:42:43.106852756 +0000 UTC m=+1184.580997170" watchObservedRunningTime="2026-03-08 00:42:43.110376453 +0000 UTC m=+1184.584520837" Mar 08 00:42:43 crc kubenswrapper[4762]: I0308 00:42:43.149385 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" podStartSLOduration=33.895810489 podStartE2EDuration="39.149365809s" podCreationTimestamp="2026-03-08 00:42:04 +0000 UTC" firstStartedPulling="2026-03-08 00:42:36.607899288 +0000 UTC m=+1178.082043662" lastFinishedPulling="2026-03-08 00:42:41.861454598 +0000 UTC m=+1183.335598982" observedRunningTime="2026-03-08 00:42:43.142096617 +0000 UTC m=+1184.616240981" watchObservedRunningTime="2026-03-08 00:42:43.149365809 +0000 UTC m=+1184.623510153" Mar 08 00:42:44 crc 
kubenswrapper[4762]: I0308 00:42:44.770602 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww" Mar 08 00:42:44 crc kubenswrapper[4762]: I0308 00:42:44.870000 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" Mar 08 00:42:45 crc kubenswrapper[4762]: I0308 00:42:45.007246 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" Mar 08 00:42:45 crc kubenswrapper[4762]: I0308 00:42:45.335939 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" Mar 08 00:42:45 crc kubenswrapper[4762]: I0308 00:42:45.441267 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" Mar 08 00:42:47 crc kubenswrapper[4762]: I0308 00:42:47.196380 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 00:42:56 crc kubenswrapper[4762]: I0308 00:42:56.326057 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" Mar 08 00:42:56 crc kubenswrapper[4762]: I0308 00:42:56.750434 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 00:43:00 crc kubenswrapper[4762]: I0308 00:43:00.473399 4762 scope.go:117] "RemoveContainer" containerID="413c51e7f94dc6fa2448119be31d59a2f43d81236accfd17b5d1a149717b8e37" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.624295 4762 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-66f2g"] Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.638517 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-66f2g"] Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.638642 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-66f2g" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.643588 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.643753 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.644088 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-w2r88" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.644292 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.692811 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rlt44"] Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.694072 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rlt44" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.695723 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.710544 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bee0137-07dd-45cb-bd57-7fe6246f422a-config\") pod \"dnsmasq-dns-675f4bcbfc-66f2g\" (UID: \"6bee0137-07dd-45cb-bd57-7fe6246f422a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-66f2g" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.710598 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-config\") pod \"dnsmasq-dns-78dd6ddcc-rlt44\" (UID: \"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rlt44" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.710627 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd79l\" (UniqueName: \"kubernetes.io/projected/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-kube-api-access-xd79l\") pod \"dnsmasq-dns-78dd6ddcc-rlt44\" (UID: \"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rlt44" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.710658 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-rlt44\" (UID: \"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rlt44" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.710704 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-knwrq\" (UniqueName: \"kubernetes.io/projected/6bee0137-07dd-45cb-bd57-7fe6246f422a-kube-api-access-knwrq\") pod \"dnsmasq-dns-675f4bcbfc-66f2g\" (UID: \"6bee0137-07dd-45cb-bd57-7fe6246f422a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-66f2g" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.710910 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rlt44"] Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.812625 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bee0137-07dd-45cb-bd57-7fe6246f422a-config\") pod \"dnsmasq-dns-675f4bcbfc-66f2g\" (UID: \"6bee0137-07dd-45cb-bd57-7fe6246f422a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-66f2g" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.812690 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-config\") pod \"dnsmasq-dns-78dd6ddcc-rlt44\" (UID: \"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rlt44" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.812711 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd79l\" (UniqueName: \"kubernetes.io/projected/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-kube-api-access-xd79l\") pod \"dnsmasq-dns-78dd6ddcc-rlt44\" (UID: \"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rlt44" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.812734 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-rlt44\" (UID: \"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rlt44" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 
00:43:13.812788 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knwrq\" (UniqueName: \"kubernetes.io/projected/6bee0137-07dd-45cb-bd57-7fe6246f422a-kube-api-access-knwrq\") pod \"dnsmasq-dns-675f4bcbfc-66f2g\" (UID: \"6bee0137-07dd-45cb-bd57-7fe6246f422a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-66f2g" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.813728 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bee0137-07dd-45cb-bd57-7fe6246f422a-config\") pod \"dnsmasq-dns-675f4bcbfc-66f2g\" (UID: \"6bee0137-07dd-45cb-bd57-7fe6246f422a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-66f2g" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.813955 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-config\") pod \"dnsmasq-dns-78dd6ddcc-rlt44\" (UID: \"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rlt44" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.813968 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-rlt44\" (UID: \"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rlt44" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.833952 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd79l\" (UniqueName: \"kubernetes.io/projected/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-kube-api-access-xd79l\") pod \"dnsmasq-dns-78dd6ddcc-rlt44\" (UID: \"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37\") " pod="openstack/dnsmasq-dns-78dd6ddcc-rlt44" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.835467 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-knwrq\" (UniqueName: \"kubernetes.io/projected/6bee0137-07dd-45cb-bd57-7fe6246f422a-kube-api-access-knwrq\") pod \"dnsmasq-dns-675f4bcbfc-66f2g\" (UID: \"6bee0137-07dd-45cb-bd57-7fe6246f422a\") " pod="openstack/dnsmasq-dns-675f4bcbfc-66f2g" Mar 08 00:43:13 crc kubenswrapper[4762]: I0308 00:43:13.959327 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-66f2g" Mar 08 00:43:14 crc kubenswrapper[4762]: I0308 00:43:14.011037 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rlt44" Mar 08 00:43:14 crc kubenswrapper[4762]: I0308 00:43:14.322841 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rlt44"] Mar 08 00:43:14 crc kubenswrapper[4762]: I0308 00:43:14.330043 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 00:43:14 crc kubenswrapper[4762]: I0308 00:43:14.441176 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-rlt44" event={"ID":"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37","Type":"ContainerStarted","Data":"c0b1efded8498bde1942f2d747b5a3281f6a16de8a34dc294edf74aeafb7ea1d"} Mar 08 00:43:14 crc kubenswrapper[4762]: I0308 00:43:14.476938 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-66f2g"] Mar 08 00:43:14 crc kubenswrapper[4762]: W0308 00:43:14.477975 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bee0137_07dd_45cb_bd57_7fe6246f422a.slice/crio-15a29596eeb9d3ed347cceecb86b57a14c0c36d9b86c0bbad7cbbe5004cbff8a WatchSource:0}: Error finding container 15a29596eeb9d3ed347cceecb86b57a14c0c36d9b86c0bbad7cbbe5004cbff8a: Status 404 returned error can't find the container with id 15a29596eeb9d3ed347cceecb86b57a14c0c36d9b86c0bbad7cbbe5004cbff8a Mar 08 00:43:15 crc 
kubenswrapper[4762]: I0308 00:43:15.458688 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-66f2g" event={"ID":"6bee0137-07dd-45cb-bd57-7fe6246f422a","Type":"ContainerStarted","Data":"15a29596eeb9d3ed347cceecb86b57a14c0c36d9b86c0bbad7cbbe5004cbff8a"} Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.599208 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-66f2g"] Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.629102 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lh8v4"] Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.630611 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.664354 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9885b07-ca68-486f-b6ec-995cde630f8a-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lh8v4\" (UID: \"e9885b07-ca68-486f-b6ec-995cde630f8a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.664412 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9885b07-ca68-486f-b6ec-995cde630f8a-config\") pod \"dnsmasq-dns-5ccc8479f9-lh8v4\" (UID: \"e9885b07-ca68-486f-b6ec-995cde630f8a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.664449 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4rds\" (UniqueName: \"kubernetes.io/projected/e9885b07-ca68-486f-b6ec-995cde630f8a-kube-api-access-h4rds\") pod \"dnsmasq-dns-5ccc8479f9-lh8v4\" (UID: \"e9885b07-ca68-486f-b6ec-995cde630f8a\") " 
pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.674262 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lh8v4"] Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.766090 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9885b07-ca68-486f-b6ec-995cde630f8a-config\") pod \"dnsmasq-dns-5ccc8479f9-lh8v4\" (UID: \"e9885b07-ca68-486f-b6ec-995cde630f8a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.766178 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4rds\" (UniqueName: \"kubernetes.io/projected/e9885b07-ca68-486f-b6ec-995cde630f8a-kube-api-access-h4rds\") pod \"dnsmasq-dns-5ccc8479f9-lh8v4\" (UID: \"e9885b07-ca68-486f-b6ec-995cde630f8a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.766290 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9885b07-ca68-486f-b6ec-995cde630f8a-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lh8v4\" (UID: \"e9885b07-ca68-486f-b6ec-995cde630f8a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.767260 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9885b07-ca68-486f-b6ec-995cde630f8a-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-lh8v4\" (UID: \"e9885b07-ca68-486f-b6ec-995cde630f8a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.767933 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9885b07-ca68-486f-b6ec-995cde630f8a-config\") pod \"dnsmasq-dns-5ccc8479f9-lh8v4\" 
(UID: \"e9885b07-ca68-486f-b6ec-995cde630f8a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.794048 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4rds\" (UniqueName: \"kubernetes.io/projected/e9885b07-ca68-486f-b6ec-995cde630f8a-kube-api-access-h4rds\") pod \"dnsmasq-dns-5ccc8479f9-lh8v4\" (UID: \"e9885b07-ca68-486f-b6ec-995cde630f8a\") " pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.931787 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rlt44"] Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.955742 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mt6gt"] Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.956905 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.967544 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mt6gt"] Mar 08 00:43:16 crc kubenswrapper[4762]: I0308 00:43:16.998151 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.073842 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5382434b-f08f-479e-aa77-d5e0d436c6eb-config\") pod \"dnsmasq-dns-57d769cc4f-mt6gt\" (UID: \"5382434b-f08f-479e-aa77-d5e0d436c6eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.073913 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5382434b-f08f-479e-aa77-d5e0d436c6eb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mt6gt\" (UID: \"5382434b-f08f-479e-aa77-d5e0d436c6eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.073938 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzxnd\" (UniqueName: \"kubernetes.io/projected/5382434b-f08f-479e-aa77-d5e0d436c6eb-kube-api-access-lzxnd\") pod \"dnsmasq-dns-57d769cc4f-mt6gt\" (UID: \"5382434b-f08f-479e-aa77-d5e0d436c6eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.181724 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5382434b-f08f-479e-aa77-d5e0d436c6eb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mt6gt\" (UID: \"5382434b-f08f-479e-aa77-d5e0d436c6eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.181782 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzxnd\" (UniqueName: \"kubernetes.io/projected/5382434b-f08f-479e-aa77-d5e0d436c6eb-kube-api-access-lzxnd\") pod \"dnsmasq-dns-57d769cc4f-mt6gt\" (UID: 
\"5382434b-f08f-479e-aa77-d5e0d436c6eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.181877 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5382434b-f08f-479e-aa77-d5e0d436c6eb-config\") pod \"dnsmasq-dns-57d769cc4f-mt6gt\" (UID: \"5382434b-f08f-479e-aa77-d5e0d436c6eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.182780 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5382434b-f08f-479e-aa77-d5e0d436c6eb-config\") pod \"dnsmasq-dns-57d769cc4f-mt6gt\" (UID: \"5382434b-f08f-479e-aa77-d5e0d436c6eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.183296 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5382434b-f08f-479e-aa77-d5e0d436c6eb-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mt6gt\" (UID: \"5382434b-f08f-479e-aa77-d5e0d436c6eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.222888 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzxnd\" (UniqueName: \"kubernetes.io/projected/5382434b-f08f-479e-aa77-d5e0d436c6eb-kube-api-access-lzxnd\") pod \"dnsmasq-dns-57d769cc4f-mt6gt\" (UID: \"5382434b-f08f-479e-aa77-d5e0d436c6eb\") " pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.278177 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.620116 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lh8v4"] Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.767623 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.771720 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.774425 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.774578 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.776131 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.776325 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.776377 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-brbgv" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.776455 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.776453 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.786298 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 
00:43:17.809732 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.810806 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.810834 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.810866 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.810893 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.810930 4762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.810958 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.810981 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwlj2\" (UniqueName: \"kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-kube-api-access-vwlj2\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.810999 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/543cbbde-da2d-43c4-87f9-85f8e4e90101-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.811019 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/543cbbde-da2d-43c4-87f9-85f8e4e90101-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 
00:43:17.811040 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.912373 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.912412 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.912428 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.912459 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.912484 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.912514 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.912538 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.912555 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwlj2\" (UniqueName: \"kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-kube-api-access-vwlj2\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.912573 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/543cbbde-da2d-43c4-87f9-85f8e4e90101-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.912591 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/543cbbde-da2d-43c4-87f9-85f8e4e90101-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.912613 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.913337 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.913482 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.913801 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.913994 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-config-data\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.914529 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.916003 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.931974 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/543cbbde-da2d-43c4-87f9-85f8e4e90101-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.932641 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/543cbbde-da2d-43c4-87f9-85f8e4e90101-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.932935 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc 
kubenswrapper[4762]: I0308 00:43:17.936954 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwlj2\" (UniqueName: \"kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-kube-api-access-vwlj2\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.938475 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:17 crc kubenswrapper[4762]: I0308 00:43:17.951582 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.099194 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.100347 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.107524 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.107788 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.107859 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.107939 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.108021 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.108049 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qpbz2" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.108111 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.115633 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.132283 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.217935 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.217989 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.218021 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.218260 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a759d745-52d2-48f8-9848-172ace2b5120-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.218369 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-config-data\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc 
kubenswrapper[4762]: I0308 00:43:18.218421 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfsrh\" (UniqueName: \"kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-kube-api-access-lfsrh\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.218465 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.218503 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.218529 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.218549 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.218573 4762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a759d745-52d2-48f8-9848-172ace2b5120-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.319977 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.320031 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a759d745-52d2-48f8-9848-172ace2b5120-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.320052 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-config-data\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.320096 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfsrh\" (UniqueName: \"kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-kube-api-access-lfsrh\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.320114 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.320149 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.320165 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.320199 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.320217 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a759d745-52d2-48f8-9848-172ace2b5120-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.320229 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") device mount path 
\"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.320612 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.320854 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.321430 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.321919 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-config-data\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.321951 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.322398 4762 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.322744 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.325190 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a759d745-52d2-48f8-9848-172ace2b5120-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.329523 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.329567 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a759d745-52d2-48f8-9848-172ace2b5120-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.334742 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") 
" pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.337099 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfsrh\" (UniqueName: \"kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-kube-api-access-lfsrh\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.346825 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " pod="openstack/rabbitmq-server-0" Mar 08 00:43:18 crc kubenswrapper[4762]: I0308 00:43:18.429830 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.126908 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.129069 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.132708 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.133254 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.134172 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vn5zr" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.134498 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.147098 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.148833 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.238923 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0d82ab27-d2d8-486a-8514-2af542e4223a-config-data-default\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.238973 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d82ab27-d2d8-486a-8514-2af542e4223a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.239021 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-cxjxm\" (UniqueName: \"kubernetes.io/projected/0d82ab27-d2d8-486a-8514-2af542e4223a-kube-api-access-cxjxm\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.239122 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d82ab27-d2d8-486a-8514-2af542e4223a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.239210 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0d82ab27-d2d8-486a-8514-2af542e4223a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.239236 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0d82ab27-d2d8-486a-8514-2af542e4223a-kolla-config\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.239270 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d82ab27-d2d8-486a-8514-2af542e4223a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.239484 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.342003 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.342056 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d82ab27-d2d8-486a-8514-2af542e4223a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.342301 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.346244 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0d82ab27-d2d8-486a-8514-2af542e4223a-config-data-default\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.342212 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0d82ab27-d2d8-486a-8514-2af542e4223a-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.346323 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxjxm\" (UniqueName: \"kubernetes.io/projected/0d82ab27-d2d8-486a-8514-2af542e4223a-kube-api-access-cxjxm\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.346428 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d82ab27-d2d8-486a-8514-2af542e4223a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.346449 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0d82ab27-d2d8-486a-8514-2af542e4223a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.346467 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0d82ab27-d2d8-486a-8514-2af542e4223a-kolla-config\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.346484 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d82ab27-d2d8-486a-8514-2af542e4223a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: 
I0308 00:43:19.347211 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0d82ab27-d2d8-486a-8514-2af542e4223a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.347384 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0d82ab27-d2d8-486a-8514-2af542e4223a-kolla-config\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.348445 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0d82ab27-d2d8-486a-8514-2af542e4223a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.352928 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d82ab27-d2d8-486a-8514-2af542e4223a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.355113 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d82ab27-d2d8-486a-8514-2af542e4223a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.365414 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxjxm\" (UniqueName: 
\"kubernetes.io/projected/0d82ab27-d2d8-486a-8514-2af542e4223a-kube-api-access-cxjxm\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.375132 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-galera-0\" (UID: \"0d82ab27-d2d8-486a-8514-2af542e4223a\") " pod="openstack/openstack-galera-0" Mar 08 00:43:19 crc kubenswrapper[4762]: I0308 00:43:19.469390 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.558529 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.560261 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.561930 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-sxgp2" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.563126 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.563316 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.566165 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.569133 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.666350 4762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f50a5390-b172-470a-bcfd-161e360d90db-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.666455 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.666487 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f50a5390-b172-470a-bcfd-161e360d90db-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.666541 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f50a5390-b172-470a-bcfd-161e360d90db-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.666578 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f50a5390-b172-470a-bcfd-161e360d90db-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.666598 4762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f50a5390-b172-470a-bcfd-161e360d90db-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.666801 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxmh6\" (UniqueName: \"kubernetes.io/projected/f50a5390-b172-470a-bcfd-161e360d90db-kube-api-access-qxmh6\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.666867 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f50a5390-b172-470a-bcfd-161e360d90db-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.770035 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f50a5390-b172-470a-bcfd-161e360d90db-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.770743 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f50a5390-b172-470a-bcfd-161e360d90db-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.770781 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f50a5390-b172-470a-bcfd-161e360d90db-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.770826 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxmh6\" (UniqueName: \"kubernetes.io/projected/f50a5390-b172-470a-bcfd-161e360d90db-kube-api-access-qxmh6\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.770844 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f50a5390-b172-470a-bcfd-161e360d90db-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.770881 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f50a5390-b172-470a-bcfd-161e360d90db-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.770923 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.770946 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f50a5390-b172-470a-bcfd-161e360d90db-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.771261 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.771662 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f50a5390-b172-470a-bcfd-161e360d90db-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.771874 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f50a5390-b172-470a-bcfd-161e360d90db-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.772378 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f50a5390-b172-470a-bcfd-161e360d90db-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.772648 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/f50a5390-b172-470a-bcfd-161e360d90db-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.785248 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f50a5390-b172-470a-bcfd-161e360d90db-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.787693 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxmh6\" (UniqueName: \"kubernetes.io/projected/f50a5390-b172-470a-bcfd-161e360d90db-kube-api-access-qxmh6\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.815454 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.835834 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f50a5390-b172-470a-bcfd-161e360d90db-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f50a5390-b172-470a-bcfd-161e360d90db\") " pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.884003 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.893077 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.894076 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.895990 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-mszjc" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.898110 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.899641 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.906710 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.974490 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/58ff3546-162f-4796-961b-2943d7465355-memcached-tls-certs\") pod \"memcached-0\" (UID: \"58ff3546-162f-4796-961b-2943d7465355\") " pod="openstack/memcached-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.974817 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmq7x\" (UniqueName: \"kubernetes.io/projected/58ff3546-162f-4796-961b-2943d7465355-kube-api-access-hmq7x\") pod \"memcached-0\" (UID: \"58ff3546-162f-4796-961b-2943d7465355\") " pod="openstack/memcached-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.974929 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/58ff3546-162f-4796-961b-2943d7465355-kolla-config\") pod \"memcached-0\" (UID: \"58ff3546-162f-4796-961b-2943d7465355\") " pod="openstack/memcached-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.975012 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ff3546-162f-4796-961b-2943d7465355-combined-ca-bundle\") pod \"memcached-0\" (UID: \"58ff3546-162f-4796-961b-2943d7465355\") " pod="openstack/memcached-0" Mar 08 00:43:20 crc kubenswrapper[4762]: I0308 00:43:20.975117 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58ff3546-162f-4796-961b-2943d7465355-config-data\") pod \"memcached-0\" (UID: \"58ff3546-162f-4796-961b-2943d7465355\") " pod="openstack/memcached-0" Mar 08 00:43:21 crc kubenswrapper[4762]: I0308 00:43:21.076412 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/58ff3546-162f-4796-961b-2943d7465355-memcached-tls-certs\") pod \"memcached-0\" (UID: \"58ff3546-162f-4796-961b-2943d7465355\") " pod="openstack/memcached-0" Mar 08 00:43:21 crc kubenswrapper[4762]: I0308 00:43:21.076778 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmq7x\" (UniqueName: \"kubernetes.io/projected/58ff3546-162f-4796-961b-2943d7465355-kube-api-access-hmq7x\") pod \"memcached-0\" (UID: \"58ff3546-162f-4796-961b-2943d7465355\") " pod="openstack/memcached-0" Mar 08 00:43:21 crc kubenswrapper[4762]: I0308 00:43:21.076932 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/58ff3546-162f-4796-961b-2943d7465355-kolla-config\") pod \"memcached-0\" (UID: \"58ff3546-162f-4796-961b-2943d7465355\") " 
pod="openstack/memcached-0" Mar 08 00:43:21 crc kubenswrapper[4762]: I0308 00:43:21.077053 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ff3546-162f-4796-961b-2943d7465355-combined-ca-bundle\") pod \"memcached-0\" (UID: \"58ff3546-162f-4796-961b-2943d7465355\") " pod="openstack/memcached-0" Mar 08 00:43:21 crc kubenswrapper[4762]: I0308 00:43:21.077199 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58ff3546-162f-4796-961b-2943d7465355-config-data\") pod \"memcached-0\" (UID: \"58ff3546-162f-4796-961b-2943d7465355\") " pod="openstack/memcached-0" Mar 08 00:43:21 crc kubenswrapper[4762]: I0308 00:43:21.077697 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/58ff3546-162f-4796-961b-2943d7465355-kolla-config\") pod \"memcached-0\" (UID: \"58ff3546-162f-4796-961b-2943d7465355\") " pod="openstack/memcached-0" Mar 08 00:43:21 crc kubenswrapper[4762]: I0308 00:43:21.078173 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58ff3546-162f-4796-961b-2943d7465355-config-data\") pod \"memcached-0\" (UID: \"58ff3546-162f-4796-961b-2943d7465355\") " pod="openstack/memcached-0" Mar 08 00:43:21 crc kubenswrapper[4762]: I0308 00:43:21.081008 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58ff3546-162f-4796-961b-2943d7465355-combined-ca-bundle\") pod \"memcached-0\" (UID: \"58ff3546-162f-4796-961b-2943d7465355\") " pod="openstack/memcached-0" Mar 08 00:43:21 crc kubenswrapper[4762]: I0308 00:43:21.081485 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/58ff3546-162f-4796-961b-2943d7465355-memcached-tls-certs\") pod \"memcached-0\" (UID: \"58ff3546-162f-4796-961b-2943d7465355\") " pod="openstack/memcached-0" Mar 08 00:43:21 crc kubenswrapper[4762]: I0308 00:43:21.094839 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmq7x\" (UniqueName: \"kubernetes.io/projected/58ff3546-162f-4796-961b-2943d7465355-kube-api-access-hmq7x\") pod \"memcached-0\" (UID: \"58ff3546-162f-4796-961b-2943d7465355\") " pod="openstack/memcached-0" Mar 08 00:43:21 crc kubenswrapper[4762]: I0308 00:43:21.208053 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 08 00:43:22 crc kubenswrapper[4762]: I0308 00:43:22.532464 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" event={"ID":"e9885b07-ca68-486f-b6ec-995cde630f8a","Type":"ContainerStarted","Data":"567cbdeee3c6da14e70e634b1d533804b847abc51920438b5893c118d4ff5588"} Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.099097 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.102465 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.104383 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-lklqk" Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.132799 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.211907 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdwmj\" (UniqueName: \"kubernetes.io/projected/24ad8db6-2015-4ebf-847c-64a8c4a548d3-kube-api-access-pdwmj\") pod \"kube-state-metrics-0\" (UID: \"24ad8db6-2015-4ebf-847c-64a8c4a548d3\") " pod="openstack/kube-state-metrics-0" Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.313407 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdwmj\" (UniqueName: \"kubernetes.io/projected/24ad8db6-2015-4ebf-847c-64a8c4a548d3-kube-api-access-pdwmj\") pod \"kube-state-metrics-0\" (UID: \"24ad8db6-2015-4ebf-847c-64a8c4a548d3\") " pod="openstack/kube-state-metrics-0" Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.330496 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdwmj\" (UniqueName: \"kubernetes.io/projected/24ad8db6-2015-4ebf-847c-64a8c4a548d3-kube-api-access-pdwmj\") pod \"kube-state-metrics-0\" (UID: \"24ad8db6-2015-4ebf-847c-64a8c4a548d3\") " pod="openstack/kube-state-metrics-0" Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.429510 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.675734 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-rjj98"] Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.676987 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-rjj98" Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.679886 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.680057 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-tgnml" Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.699072 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-rjj98"] Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.820949 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lljzr\" (UniqueName: \"kubernetes.io/projected/5e5e70a6-f33a-4930-9699-83dfa11cf98d-kube-api-access-lljzr\") pod \"observability-ui-dashboards-66cbf594b5-rjj98\" (UID: \"5e5e70a6-f33a-4930-9699-83dfa11cf98d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-rjj98" Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.821063 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e5e70a6-f33a-4930-9699-83dfa11cf98d-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-rjj98\" (UID: \"5e5e70a6-f33a-4930-9699-83dfa11cf98d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-rjj98" Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 
00:43:23.924546 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lljzr\" (UniqueName: \"kubernetes.io/projected/5e5e70a6-f33a-4930-9699-83dfa11cf98d-kube-api-access-lljzr\") pod \"observability-ui-dashboards-66cbf594b5-rjj98\" (UID: \"5e5e70a6-f33a-4930-9699-83dfa11cf98d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-rjj98" Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.924603 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e5e70a6-f33a-4930-9699-83dfa11cf98d-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-rjj98\" (UID: \"5e5e70a6-f33a-4930-9699-83dfa11cf98d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-rjj98" Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.929588 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5e5e70a6-f33a-4930-9699-83dfa11cf98d-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-rjj98\" (UID: \"5e5e70a6-f33a-4930-9699-83dfa11cf98d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-rjj98" Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.959983 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lljzr\" (UniqueName: \"kubernetes.io/projected/5e5e70a6-f33a-4930-9699-83dfa11cf98d-kube-api-access-lljzr\") pod \"observability-ui-dashboards-66cbf594b5-rjj98\" (UID: \"5e5e70a6-f33a-4930-9699-83dfa11cf98d\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-rjj98" Mar 08 00:43:23 crc kubenswrapper[4762]: I0308 00:43:23.993378 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-rjj98" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.002596 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-859d87bf79-sbgvn"] Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.003879 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.032289 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-859d87bf79-sbgvn"] Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.127790 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-console-serving-cert\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.127850 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-trusted-ca-bundle\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.127942 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlprj\" (UniqueName: \"kubernetes.io/projected/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-kube-api-access-mlprj\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.127972 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-console-oauth-config\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.127994 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-oauth-serving-cert\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.128022 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-service-ca\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.128059 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-console-config\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.229808 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-console-config\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.229877 
4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-console-serving-cert\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.229905 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-trusted-ca-bundle\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.229957 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlprj\" (UniqueName: \"kubernetes.io/projected/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-kube-api-access-mlprj\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.229988 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-console-oauth-config\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.230008 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-oauth-serving-cert\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.230036 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-service-ca\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.230691 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-console-config\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.230985 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-service-ca\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.230995 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-trusted-ca-bundle\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.231228 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-oauth-serving-cert\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.254326 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-console-oauth-config\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.262225 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-console-serving-cert\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.284882 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.302694 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlprj\" (UniqueName: \"kubernetes.io/projected/9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8-kube-api-access-mlprj\") pod \"console-859d87bf79-sbgvn\" (UID: \"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8\") " pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.332875 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.372643 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.403000 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.403213 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.406675 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.406871 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-gpv7p" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.407065 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.407351 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.407412 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.407109 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.425983 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.471387 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-web-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.471435 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.471459 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-config\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.471476 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.471511 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.471541 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/02437d1d-337c-4013-92e1-69125f57e03f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.471556 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/02437d1d-337c-4013-92e1-69125f57e03f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.471575 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq6hr\" (UniqueName: \"kubernetes.io/projected/02437d1d-337c-4013-92e1-69125f57e03f-kube-api-access-rq6hr\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.471590 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.471624 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.572940 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.572999 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.573022 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-config\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.573039 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.573078 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.573148 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/02437d1d-337c-4013-92e1-69125f57e03f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.573167 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/02437d1d-337c-4013-92e1-69125f57e03f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.573191 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq6hr\" (UniqueName: \"kubernetes.io/projected/02437d1d-337c-4013-92e1-69125f57e03f-kube-api-access-rq6hr\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.573208 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.573246 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.574126 4762 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.574672 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.575661 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.575815 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.579710 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " 
pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.580296 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-config\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.583485 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/02437d1d-337c-4013-92e1-69125f57e03f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.584948 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/02437d1d-337c-4013-92e1-69125f57e03f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.586388 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.592979 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq6hr\" (UniqueName: \"kubernetes.io/projected/02437d1d-337c-4013-92e1-69125f57e03f-kube-api-access-rq6hr\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.618931 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:24 crc kubenswrapper[4762]: I0308 00:43:24.731314 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.003657 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.220837 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kkckg"] Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.221876 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.224601 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.225182 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.226605 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-b77wl" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.262185 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kkckg"] Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.282225 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-ffhbt"] Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.283857 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.291610 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ffhbt"] Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.303210 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bcab1df7-ddcc-4784-8a49-0be5161590f2-var-log-ovn\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.303283 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcab1df7-ddcc-4784-8a49-0be5161590f2-scripts\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.303344 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bcab1df7-ddcc-4784-8a49-0be5161590f2-var-run\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.303880 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcab1df7-ddcc-4784-8a49-0be5161590f2-ovn-controller-tls-certs\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.304000 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bcab1df7-ddcc-4784-8a49-0be5161590f2-combined-ca-bundle\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.304086 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmmpl\" (UniqueName: \"kubernetes.io/projected/bcab1df7-ddcc-4784-8a49-0be5161590f2-kube-api-access-wmmpl\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.304145 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcab1df7-ddcc-4784-8a49-0be5161590f2-var-run-ovn\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.406124 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/21774b04-29d4-4687-b650-87eed791f3e8-var-lib\") pod \"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.406173 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcab1df7-ddcc-4784-8a49-0be5161590f2-combined-ca-bundle\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.406239 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/21774b04-29d4-4687-b650-87eed791f3e8-var-run\") pod \"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.406259 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmmpl\" (UniqueName: \"kubernetes.io/projected/bcab1df7-ddcc-4784-8a49-0be5161590f2-kube-api-access-wmmpl\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.406281 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcab1df7-ddcc-4784-8a49-0be5161590f2-var-run-ovn\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.406341 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2292\" (UniqueName: \"kubernetes.io/projected/21774b04-29d4-4687-b650-87eed791f3e8-kube-api-access-x2292\") pod \"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.406362 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bcab1df7-ddcc-4784-8a49-0be5161590f2-var-log-ovn\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.406378 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/21774b04-29d4-4687-b650-87eed791f3e8-etc-ovs\") pod 
\"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.406410 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcab1df7-ddcc-4784-8a49-0be5161590f2-scripts\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.406436 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bcab1df7-ddcc-4784-8a49-0be5161590f2-var-run\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.406460 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcab1df7-ddcc-4784-8a49-0be5161590f2-ovn-controller-tls-certs\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.406481 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21774b04-29d4-4687-b650-87eed791f3e8-scripts\") pod \"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.406501 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/21774b04-29d4-4687-b650-87eed791f3e8-var-log\") pod \"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" 
Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.407462 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bcab1df7-ddcc-4784-8a49-0be5161590f2-var-run-ovn\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.407604 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bcab1df7-ddcc-4784-8a49-0be5161590f2-var-log-ovn\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.409331 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bcab1df7-ddcc-4784-8a49-0be5161590f2-scripts\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.409472 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bcab1df7-ddcc-4784-8a49-0be5161590f2-var-run\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.415868 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcab1df7-ddcc-4784-8a49-0be5161590f2-ovn-controller-tls-certs\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.415893 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bcab1df7-ddcc-4784-8a49-0be5161590f2-combined-ca-bundle\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.422373 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmmpl\" (UniqueName: \"kubernetes.io/projected/bcab1df7-ddcc-4784-8a49-0be5161590f2-kube-api-access-wmmpl\") pod \"ovn-controller-kkckg\" (UID: \"bcab1df7-ddcc-4784-8a49-0be5161590f2\") " pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.507682 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21774b04-29d4-4687-b650-87eed791f3e8-scripts\") pod \"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.507737 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/21774b04-29d4-4687-b650-87eed791f3e8-var-log\") pod \"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.507777 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/21774b04-29d4-4687-b650-87eed791f3e8-var-lib\") pod \"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.507834 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21774b04-29d4-4687-b650-87eed791f3e8-var-run\") pod \"ovn-controller-ovs-ffhbt\" (UID: 
\"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.507935 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2292\" (UniqueName: \"kubernetes.io/projected/21774b04-29d4-4687-b650-87eed791f3e8-kube-api-access-x2292\") pod \"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.507965 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/21774b04-29d4-4687-b650-87eed791f3e8-etc-ovs\") pod \"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.508042 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/21774b04-29d4-4687-b650-87eed791f3e8-var-log\") pod \"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.508090 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/21774b04-29d4-4687-b650-87eed791f3e8-var-lib\") pod \"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.508233 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/21774b04-29d4-4687-b650-87eed791f3e8-etc-ovs\") pod \"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.508637 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/21774b04-29d4-4687-b650-87eed791f3e8-var-run\") pod \"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.510509 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/21774b04-29d4-4687-b650-87eed791f3e8-scripts\") pod \"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.524019 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2292\" (UniqueName: \"kubernetes.io/projected/21774b04-29d4-4687-b650-87eed791f3e8-kube-api-access-x2292\") pod \"ovn-controller-ovs-ffhbt\" (UID: \"21774b04-29d4-4687-b650-87eed791f3e8\") " pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.536922 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kkckg" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.596462 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.912827 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.914342 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.920116 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-wg47c" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.920378 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.923031 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.923624 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.923469 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 08 00:43:26 crc kubenswrapper[4762]: I0308 00:43:26.927318 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.017414 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.017487 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.017574 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.017613 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.017637 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-config\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.017705 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.017735 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.017793 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7mnm\" (UniqueName: 
\"kubernetes.io/projected/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-kube-api-access-c7mnm\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.120540 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.120658 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.120813 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.120892 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.120926 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-config\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " 
pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.121090 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.121163 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.121154 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.121242 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7mnm\" (UniqueName: \"kubernetes.io/projected/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-kube-api-access-c7mnm\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.121919 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.123225 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-config\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.123632 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.125391 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.129126 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.130341 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.152270 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7mnm\" (UniqueName: \"kubernetes.io/projected/c3e5a947-ec53-4871-a0d8-c51ca14cf8c4-kube-api-access-c7mnm\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") 
" pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.174582 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4\") " pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:27 crc kubenswrapper[4762]: I0308 00:43:27.249563 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.209998 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.228175 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.233115 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.239379 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.239792 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.240407 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-bv4rs" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.240531 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.279409 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72w2s\" (UniqueName: 
\"kubernetes.io/projected/c232bd40-b650-4530-8df3-2fd1c3f57398-kube-api-access-72w2s\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.279735 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c232bd40-b650-4530-8df3-2fd1c3f57398-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.279860 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c232bd40-b650-4530-8df3-2fd1c3f57398-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.279930 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c232bd40-b650-4530-8df3-2fd1c3f57398-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.280017 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c232bd40-b650-4530-8df3-2fd1c3f57398-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.280070 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c232bd40-b650-4530-8df3-2fd1c3f57398-config\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.280129 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c232bd40-b650-4530-8df3-2fd1c3f57398-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.280228 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.382096 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c232bd40-b650-4530-8df3-2fd1c3f57398-config\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.382146 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c232bd40-b650-4530-8df3-2fd1c3f57398-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.382221 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 
00:43:30.382262 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72w2s\" (UniqueName: \"kubernetes.io/projected/c232bd40-b650-4530-8df3-2fd1c3f57398-kube-api-access-72w2s\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.382325 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c232bd40-b650-4530-8df3-2fd1c3f57398-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.382367 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c232bd40-b650-4530-8df3-2fd1c3f57398-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.382401 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c232bd40-b650-4530-8df3-2fd1c3f57398-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.382442 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c232bd40-b650-4530-8df3-2fd1c3f57398-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.383209 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c232bd40-b650-4530-8df3-2fd1c3f57398-config\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.383288 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c232bd40-b650-4530-8df3-2fd1c3f57398-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.383333 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.383901 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c232bd40-b650-4530-8df3-2fd1c3f57398-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.390489 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c232bd40-b650-4530-8df3-2fd1c3f57398-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.390691 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c232bd40-b650-4530-8df3-2fd1c3f57398-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " 
pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.391672 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c232bd40-b650-4530-8df3-2fd1c3f57398-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.399507 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72w2s\" (UniqueName: \"kubernetes.io/projected/c232bd40-b650-4530-8df3-2fd1c3f57398-kube-api-access-72w2s\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.407982 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c232bd40-b650-4530-8df3-2fd1c3f57398\") " pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: W0308 00:43:30.472035 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda759d745_52d2_48f8_9848_172ace2b5120.slice/crio-e6c11f244b0f35c7f53574825260a487dc5bcdfea5a34a26b892024e79f85d90 WatchSource:0}: Error finding container e6c11f244b0f35c7f53574825260a487dc5bcdfea5a34a26b892024e79f85d90: Status 404 returned error can't find the container with id e6c11f244b0f35c7f53574825260a487dc5bcdfea5a34a26b892024e79f85d90 Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.570014 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.598063 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a759d745-52d2-48f8-9848-172ace2b5120","Type":"ContainerStarted","Data":"e6c11f244b0f35c7f53574825260a487dc5bcdfea5a34a26b892024e79f85d90"} Mar 08 00:43:30 crc kubenswrapper[4762]: I0308 00:43:30.895777 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 08 00:43:31 crc kubenswrapper[4762]: W0308 00:43:31.421860 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d82ab27_d2d8_486a_8514_2af542e4223a.slice/crio-7ef242a4d9b34142b90dcd42ac4dd21b8fb4a2f38b7d16409bc100211dd32e9e WatchSource:0}: Error finding container 7ef242a4d9b34142b90dcd42ac4dd21b8fb4a2f38b7d16409bc100211dd32e9e: Status 404 returned error can't find the container with id 7ef242a4d9b34142b90dcd42ac4dd21b8fb4a2f38b7d16409bc100211dd32e9e Mar 08 00:43:31 crc kubenswrapper[4762]: E0308 00:43:31.446594 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 08 00:43:31 crc kubenswrapper[4762]: E0308 00:43:31.446780 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xd79l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-rlt44_openstack(6bf3b5a7-e7aa-447c-8e6b-983ae3862d37): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 08 00:43:31 crc kubenswrapper[4762]: E0308 00:43:31.447996 4762 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-rlt44" podUID="6bf3b5a7-e7aa-447c-8e6b-983ae3862d37" Mar 08 00:43:31 crc kubenswrapper[4762]: E0308 00:43:31.454223 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Mar 08 00:43:31 crc kubenswrapper[4762]: E0308 00:43:31.454413 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-knwrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-66f2g_openstack(6bee0137-07dd-45cb-bd57-7fe6246f422a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 08 00:43:31 crc kubenswrapper[4762]: E0308 00:43:31.455805 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-66f2g" podUID="6bee0137-07dd-45cb-bd57-7fe6246f422a" Mar 08 00:43:31 crc kubenswrapper[4762]: I0308 00:43:31.613170 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0d82ab27-d2d8-486a-8514-2af542e4223a","Type":"ContainerStarted","Data":"7ef242a4d9b34142b90dcd42ac4dd21b8fb4a2f38b7d16409bc100211dd32e9e"} Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.085127 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rlt44" Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.107432 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-66f2g" Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.111590 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd79l\" (UniqueName: \"kubernetes.io/projected/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-kube-api-access-xd79l\") pod \"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37\" (UID: \"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37\") " Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.111684 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-config\") pod \"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37\" (UID: \"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37\") " Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.111922 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-dns-svc\") pod \"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37\" (UID: \"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37\") " Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.112719 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6bf3b5a7-e7aa-447c-8e6b-983ae3862d37" (UID: "6bf3b5a7-e7aa-447c-8e6b-983ae3862d37"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.113747 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-config" (OuterVolumeSpecName: "config") pod "6bf3b5a7-e7aa-447c-8e6b-983ae3862d37" (UID: "6bf3b5a7-e7aa-447c-8e6b-983ae3862d37"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.122030 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-kube-api-access-xd79l" (OuterVolumeSpecName: "kube-api-access-xd79l") pod "6bf3b5a7-e7aa-447c-8e6b-983ae3862d37" (UID: "6bf3b5a7-e7aa-447c-8e6b-983ae3862d37"). InnerVolumeSpecName "kube-api-access-xd79l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.214089 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knwrq\" (UniqueName: \"kubernetes.io/projected/6bee0137-07dd-45cb-bd57-7fe6246f422a-kube-api-access-knwrq\") pod \"6bee0137-07dd-45cb-bd57-7fe6246f422a\" (UID: \"6bee0137-07dd-45cb-bd57-7fe6246f422a\") " Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.214175 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bee0137-07dd-45cb-bd57-7fe6246f422a-config\") pod \"6bee0137-07dd-45cb-bd57-7fe6246f422a\" (UID: \"6bee0137-07dd-45cb-bd57-7fe6246f422a\") " Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.215044 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.215062 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd79l\" (UniqueName: \"kubernetes.io/projected/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-kube-api-access-xd79l\") on node \"crc\" DevicePath \"\"" Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.215073 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.215598 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bee0137-07dd-45cb-bd57-7fe6246f422a-config" (OuterVolumeSpecName: "config") pod "6bee0137-07dd-45cb-bd57-7fe6246f422a" (UID: "6bee0137-07dd-45cb-bd57-7fe6246f422a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.218395 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bee0137-07dd-45cb-bd57-7fe6246f422a-kube-api-access-knwrq" (OuterVolumeSpecName: "kube-api-access-knwrq") pod "6bee0137-07dd-45cb-bd57-7fe6246f422a" (UID: "6bee0137-07dd-45cb-bd57-7fe6246f422a"). InnerVolumeSpecName "kube-api-access-knwrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.270784 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mt6gt"] Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.279004 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.316988 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knwrq\" (UniqueName: \"kubernetes.io/projected/6bee0137-07dd-45cb-bd57-7fe6246f422a-kube-api-access-knwrq\") on node \"crc\" DevicePath \"\"" Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.317020 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bee0137-07dd-45cb-bd57-7fe6246f422a-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.447664 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kkckg"] Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.457818 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.465509 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 08 00:43:32 crc kubenswrapper[4762]: W0308 00:43:32.480355 4762 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbcab1df7_ddcc_4784_8a49_0be5161590f2.slice/crio-ec35f7929ffc42bdc770eb23cbe8960741f87f776d84bf0f4a31bd3071ebf660 WatchSource:0}: Error finding container ec35f7929ffc42bdc770eb23cbe8960741f87f776d84bf0f4a31bd3071ebf660: Status 404 returned error can't find the container with id ec35f7929ffc42bdc770eb23cbe8960741f87f776d84bf0f4a31bd3071ebf660 Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.620228 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-rlt44" event={"ID":"6bf3b5a7-e7aa-447c-8e6b-983ae3862d37","Type":"ContainerDied","Data":"c0b1efded8498bde1942f2d747b5a3281f6a16de8a34dc294edf74aeafb7ea1d"} Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.620317 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-rlt44" Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.633672 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kkckg" event={"ID":"bcab1df7-ddcc-4784-8a49-0be5161590f2","Type":"ContainerStarted","Data":"ec35f7929ffc42bdc770eb23cbe8960741f87f776d84bf0f4a31bd3071ebf660"} Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.635975 4762 generic.go:334] "Generic (PLEG): container finished" podID="5382434b-f08f-479e-aa77-d5e0d436c6eb" containerID="bcbb7b01f5895ce9d8bded95677e1e5e4b70c39a75c586a15b30915e9cf873e2" exitCode=0 Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.636047 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" event={"ID":"5382434b-f08f-479e-aa77-d5e0d436c6eb","Type":"ContainerDied","Data":"bcbb7b01f5895ce9d8bded95677e1e5e4b70c39a75c586a15b30915e9cf873e2"} Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.636074 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" 
event={"ID":"5382434b-f08f-479e-aa77-d5e0d436c6eb","Type":"ContainerStarted","Data":"879141bf6cfbea916392a90846570da61728aa0c017618c408fbc1befa42e92b"} Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.637269 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"543cbbde-da2d-43c4-87f9-85f8e4e90101","Type":"ContainerStarted","Data":"c4d7180f7793f78335eaf0ab2d0ccdfa1fb03762f9b6bb2361d29db7b975b222"} Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.638812 4762 generic.go:334] "Generic (PLEG): container finished" podID="e9885b07-ca68-486f-b6ec-995cde630f8a" containerID="90aad9fd7085a7cb69c7a807f4a63406ccc2826255ac1c00243b3a69b03a8c4b" exitCode=0 Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.649314 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" event={"ID":"e9885b07-ca68-486f-b6ec-995cde630f8a","Type":"ContainerDied","Data":"90aad9fd7085a7cb69c7a807f4a63406ccc2826255ac1c00243b3a69b03a8c4b"} Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.659776 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"58ff3546-162f-4796-961b-2943d7465355","Type":"ContainerStarted","Data":"181f685b871182fedd274f89dfa9eada2a98ad4b6bf438859651eca57d63e8a5"} Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.661427 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f50a5390-b172-470a-bcfd-161e360d90db","Type":"ContainerStarted","Data":"e5aee166a9397e4d5168552c3f59e69a9c80d19b24c138272102545b632163ca"} Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.665706 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-66f2g" event={"ID":"6bee0137-07dd-45cb-bd57-7fe6246f422a","Type":"ContainerDied","Data":"15a29596eeb9d3ed347cceecb86b57a14c0c36d9b86c0bbad7cbbe5004cbff8a"} Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 
00:43:32.665847 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-66f2g" Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.761743 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-rjj98"] Mar 08 00:43:32 crc kubenswrapper[4762]: W0308 00:43:32.799247 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02437d1d_337c_4013_92e1_69125f57e03f.slice/crio-dd5e5c5182ac68bc25b470fb5ac90373ef6b19e16386618d58c133b839bcf349 WatchSource:0}: Error finding container dd5e5c5182ac68bc25b470fb5ac90373ef6b19e16386618d58c133b839bcf349: Status 404 returned error can't find the container with id dd5e5c5182ac68bc25b470fb5ac90373ef6b19e16386618d58c133b839bcf349 Mar 08 00:43:32 crc kubenswrapper[4762]: W0308 00:43:32.804455 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24ad8db6_2015_4ebf_847c_64a8c4a548d3.slice/crio-b3cb9627523d2dfc3b8a04caa195dcca40d2cf5f99b8987c1f41485fc079b6c4 WatchSource:0}: Error finding container b3cb9627523d2dfc3b8a04caa195dcca40d2cf5f99b8987c1f41485fc079b6c4: Status 404 returned error can't find the container with id b3cb9627523d2dfc3b8a04caa195dcca40d2cf5f99b8987c1f41485fc079b6c4 Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.824482 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rlt44"] Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.846712 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-rlt44"] Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.878034 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-859d87bf79-sbgvn"] Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.887934 4762 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.904665 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.918169 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-66f2g"] Mar 08 00:43:32 crc kubenswrapper[4762]: I0308 00:43:32.925957 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-66f2g"] Mar 08 00:43:33 crc kubenswrapper[4762]: I0308 00:43:33.073567 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 08 00:43:33 crc kubenswrapper[4762]: I0308 00:43:33.274656 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bee0137-07dd-45cb-bd57-7fe6246f422a" path="/var/lib/kubelet/pods/6bee0137-07dd-45cb-bd57-7fe6246f422a/volumes" Mar 08 00:43:33 crc kubenswrapper[4762]: I0308 00:43:33.275091 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf3b5a7-e7aa-447c-8e6b-983ae3862d37" path="/var/lib/kubelet/pods/6bf3b5a7-e7aa-447c-8e6b-983ae3862d37/volumes" Mar 08 00:43:33 crc kubenswrapper[4762]: I0308 00:43:33.580082 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 08 00:43:33 crc kubenswrapper[4762]: I0308 00:43:33.681169 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ffhbt"] Mar 08 00:43:33 crc kubenswrapper[4762]: I0308 00:43:33.684526 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-859d87bf79-sbgvn" event={"ID":"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8","Type":"ContainerStarted","Data":"253eee1d691e49435b3eeddb01d81f9fd263c9c3274164aea2b282c871c9cd04"} Mar 08 00:43:33 crc kubenswrapper[4762]: I0308 00:43:33.686363 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"02437d1d-337c-4013-92e1-69125f57e03f","Type":"ContainerStarted","Data":"dd5e5c5182ac68bc25b470fb5ac90373ef6b19e16386618d58c133b839bcf349"} Mar 08 00:43:33 crc kubenswrapper[4762]: I0308 00:43:33.687706 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-rjj98" event={"ID":"5e5e70a6-f33a-4930-9699-83dfa11cf98d","Type":"ContainerStarted","Data":"b1c85f1ae4e69aa637a260569c483e8770bf93d0d149608807bb623a4f67c47b"} Mar 08 00:43:33 crc kubenswrapper[4762]: I0308 00:43:33.689215 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"24ad8db6-2015-4ebf-847c-64a8c4a548d3","Type":"ContainerStarted","Data":"b3cb9627523d2dfc3b8a04caa195dcca40d2cf5f99b8987c1f41485fc079b6c4"} Mar 08 00:43:34 crc kubenswrapper[4762]: W0308 00:43:34.099718 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3e5a947_ec53_4871_a0d8_c51ca14cf8c4.slice/crio-dbecc619b893f96baef11469b3c3d4403b1c71240b9a95ea5a52c290341a4538 WatchSource:0}: Error finding container dbecc619b893f96baef11469b3c3d4403b1c71240b9a95ea5a52c290341a4538: Status 404 returned error can't find the container with id dbecc619b893f96baef11469b3c3d4403b1c71240b9a95ea5a52c290341a4538 Mar 08 00:43:34 crc kubenswrapper[4762]: W0308 00:43:34.102528 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc232bd40_b650_4530_8df3_2fd1c3f57398.slice/crio-947f9440499f97714024fe3ffaaca57a1d2a75a9aaf033f1123942822297a729 WatchSource:0}: Error finding container 947f9440499f97714024fe3ffaaca57a1d2a75a9aaf033f1123942822297a729: Status 404 returned error can't find the container with id 947f9440499f97714024fe3ffaaca57a1d2a75a9aaf033f1123942822297a729 Mar 08 00:43:34 crc kubenswrapper[4762]: W0308 00:43:34.113884 4762 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21774b04_29d4_4687_b650_87eed791f3e8.slice/crio-0e9f6e2a8f536527c1098cd602965e09c6ba656dd940d85d849cc5bdde49a414 WatchSource:0}: Error finding container 0e9f6e2a8f536527c1098cd602965e09c6ba656dd940d85d849cc5bdde49a414: Status 404 returned error can't find the container with id 0e9f6e2a8f536527c1098cd602965e09c6ba656dd940d85d849cc5bdde49a414 Mar 08 00:43:34 crc kubenswrapper[4762]: I0308 00:43:34.697584 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4","Type":"ContainerStarted","Data":"dbecc619b893f96baef11469b3c3d4403b1c71240b9a95ea5a52c290341a4538"} Mar 08 00:43:34 crc kubenswrapper[4762]: I0308 00:43:34.698968 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ffhbt" event={"ID":"21774b04-29d4-4687-b650-87eed791f3e8","Type":"ContainerStarted","Data":"0e9f6e2a8f536527c1098cd602965e09c6ba656dd940d85d849cc5bdde49a414"} Mar 08 00:43:34 crc kubenswrapper[4762]: I0308 00:43:34.699832 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c232bd40-b650-4530-8df3-2fd1c3f57398","Type":"ContainerStarted","Data":"947f9440499f97714024fe3ffaaca57a1d2a75a9aaf033f1123942822297a729"} Mar 08 00:43:36 crc kubenswrapper[4762]: E0308 00:43:36.087810 4762 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Mar 08 00:43:36 crc kubenswrapper[4762]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/e9885b07-ca68-486f-b6ec-995cde630f8a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 08 00:43:36 crc kubenswrapper[4762]: > podSandboxID="567cbdeee3c6da14e70e634b1d533804b847abc51920438b5893c118d4ff5588" Mar 08 00:43:36 crc kubenswrapper[4762]: E0308 00:43:36.088325 4762 
kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 08 00:43:36 crc kubenswrapper[4762]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h4rds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-lh8v4_openstack(e9885b07-ca68-486f-b6ec-995cde630f8a): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/e9885b07-ca68-486f-b6ec-995cde630f8a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Mar 08 00:43:36 crc kubenswrapper[4762]: > logger="UnhandledError" Mar 08 00:43:36 crc kubenswrapper[4762]: E0308 00:43:36.089823 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/e9885b07-ca68-486f-b6ec-995cde630f8a/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" podUID="e9885b07-ca68-486f-b6ec-995cde630f8a" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.513792 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-b987k"] Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.515854 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.517232 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.539592 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b987k"] Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.582461 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/63452061-1f2b-471a-bb81-e71fa2249560-ovn-rundir\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.582581 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/63452061-1f2b-471a-bb81-e71fa2249560-ovs-rundir\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.582875 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63452061-1f2b-471a-bb81-e71fa2249560-config\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.583005 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc7ff\" (UniqueName: \"kubernetes.io/projected/63452061-1f2b-471a-bb81-e71fa2249560-kube-api-access-dc7ff\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " 
pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.583053 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/63452061-1f2b-471a-bb81-e71fa2249560-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.583082 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63452061-1f2b-471a-bb81-e71fa2249560-combined-ca-bundle\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.684821 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/63452061-1f2b-471a-bb81-e71fa2249560-ovn-rundir\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.684889 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/63452061-1f2b-471a-bb81-e71fa2249560-ovs-rundir\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.684936 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63452061-1f2b-471a-bb81-e71fa2249560-config\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " 
pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.684973 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc7ff\" (UniqueName: \"kubernetes.io/projected/63452061-1f2b-471a-bb81-e71fa2249560-kube-api-access-dc7ff\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.684994 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/63452061-1f2b-471a-bb81-e71fa2249560-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.685010 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63452061-1f2b-471a-bb81-e71fa2249560-combined-ca-bundle\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.685225 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/63452061-1f2b-471a-bb81-e71fa2249560-ovs-rundir\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.685258 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/63452061-1f2b-471a-bb81-e71fa2249560-ovn-rundir\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " pod="openstack/ovn-controller-metrics-b987k" Mar 08 
00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.685939 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63452061-1f2b-471a-bb81-e71fa2249560-config\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.688579 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63452061-1f2b-471a-bb81-e71fa2249560-combined-ca-bundle\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.689512 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/63452061-1f2b-471a-bb81-e71fa2249560-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.703378 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc7ff\" (UniqueName: \"kubernetes.io/projected/63452061-1f2b-471a-bb81-e71fa2249560-kube-api-access-dc7ff\") pod \"ovn-controller-metrics-b987k\" (UID: \"63452061-1f2b-471a-bb81-e71fa2249560\") " pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.851446 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-b987k" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.868025 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mt6gt"] Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.915391 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-zqvc6"] Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.916999 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.925268 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.934314 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-zqvc6"] Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.993720 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-zqvc6\" (UID: \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\") " pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.993794 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-zqvc6\" (UID: \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\") " pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.993859 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvrcs\" (UniqueName: \"kubernetes.io/projected/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-kube-api-access-cvrcs\") 
pod \"dnsmasq-dns-6bc7876d45-zqvc6\" (UID: \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\") " pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:39 crc kubenswrapper[4762]: I0308 00:43:39.993892 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-config\") pod \"dnsmasq-dns-6bc7876d45-zqvc6\" (UID: \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\") " pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.066978 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lh8v4"] Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.096595 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-zqvc6\" (UID: \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\") " pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.096663 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-zqvc6\" (UID: \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\") " pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.096732 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvrcs\" (UniqueName: \"kubernetes.io/projected/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-kube-api-access-cvrcs\") pod \"dnsmasq-dns-6bc7876d45-zqvc6\" (UID: \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\") " pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.096779 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-config\") pod \"dnsmasq-dns-6bc7876d45-zqvc6\" (UID: \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\") " pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.097559 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-zqvc6\" (UID: \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\") " pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.097631 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-b9mmd"] Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.098778 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-config\") pod \"dnsmasq-dns-6bc7876d45-zqvc6\" (UID: \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\") " pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.099162 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.099607 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-zqvc6\" (UID: \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\") " pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.102251 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.124464 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-b9mmd"] Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.125422 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvrcs\" (UniqueName: \"kubernetes.io/projected/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-kube-api-access-cvrcs\") pod \"dnsmasq-dns-6bc7876d45-zqvc6\" (UID: \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\") " pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.197991 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdvvl\" (UniqueName: \"kubernetes.io/projected/57550501-ec35-4064-8e34-d470df9c2721-kube-api-access-pdvvl\") pod \"dnsmasq-dns-8554648995-b9mmd\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.198042 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-b9mmd\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " 
pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.198124 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-b9mmd\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.198162 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-config\") pod \"dnsmasq-dns-8554648995-b9mmd\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.198199 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-dns-svc\") pod \"dnsmasq-dns-8554648995-b9mmd\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.243462 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.302253 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-dns-svc\") pod \"dnsmasq-dns-8554648995-b9mmd\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.302456 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdvvl\" (UniqueName: \"kubernetes.io/projected/57550501-ec35-4064-8e34-d470df9c2721-kube-api-access-pdvvl\") pod \"dnsmasq-dns-8554648995-b9mmd\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.302481 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-b9mmd\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.302539 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-b9mmd\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.302562 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-config\") pod \"dnsmasq-dns-8554648995-b9mmd\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " pod="openstack/dnsmasq-dns-8554648995-b9mmd" 
Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.305230 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-config\") pod \"dnsmasq-dns-8554648995-b9mmd\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.306013 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-dns-svc\") pod \"dnsmasq-dns-8554648995-b9mmd\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.306179 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-b9mmd\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.306202 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-b9mmd\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.322686 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdvvl\" (UniqueName: \"kubernetes.io/projected/57550501-ec35-4064-8e34-d470df9c2721-kube-api-access-pdvvl\") pod \"dnsmasq-dns-8554648995-b9mmd\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.420052 4762 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.752715 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" event={"ID":"5382434b-f08f-479e-aa77-d5e0d436c6eb","Type":"ContainerStarted","Data":"55bf57f36a2af1d7a5359cff107f1e73c4e9445aa0d95b7f05f37c308455465c"} Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.752975 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" podUID="5382434b-f08f-479e-aa77-d5e0d436c6eb" containerName="dnsmasq-dns" containerID="cri-o://55bf57f36a2af1d7a5359cff107f1e73c4e9445aa0d95b7f05f37c308455465c" gracePeriod=10 Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.753354 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.765656 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-859d87bf79-sbgvn" event={"ID":"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8","Type":"ContainerStarted","Data":"cc56921b87d3b48c9754bb0e5a8c075000c33d418a4763fd63d81afdbf0f1207"} Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.778260 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" podStartSLOduration=24.778237611 podStartE2EDuration="24.778237611s" podCreationTimestamp="2026-03-08 00:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:43:40.773094873 +0000 UTC m=+1242.247239217" watchObservedRunningTime="2026-03-08 00:43:40.778237611 +0000 UTC m=+1242.252381965" Mar 08 00:43:40 crc kubenswrapper[4762]: I0308 00:43:40.804057 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-859d87bf79-sbgvn" podStartSLOduration=17.804027444 podStartE2EDuration="17.804027444s" podCreationTimestamp="2026-03-08 00:43:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:43:40.792197165 +0000 UTC m=+1242.266341549" watchObservedRunningTime="2026-03-08 00:43:40.804027444 +0000 UTC m=+1242.278171818" Mar 08 00:43:41 crc kubenswrapper[4762]: I0308 00:43:41.781044 4762 generic.go:334] "Generic (PLEG): container finished" podID="5382434b-f08f-479e-aa77-d5e0d436c6eb" containerID="55bf57f36a2af1d7a5359cff107f1e73c4e9445aa0d95b7f05f37c308455465c" exitCode=0 Mar 08 00:43:41 crc kubenswrapper[4762]: I0308 00:43:41.781093 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" event={"ID":"5382434b-f08f-479e-aa77-d5e0d436c6eb","Type":"ContainerDied","Data":"55bf57f36a2af1d7a5359cff107f1e73c4e9445aa0d95b7f05f37c308455465c"} Mar 08 00:43:42 crc kubenswrapper[4762]: I0308 00:43:42.799483 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"543cbbde-da2d-43c4-87f9-85f8e4e90101","Type":"ContainerStarted","Data":"da1f3f1b9b29fd8f8086fd577e095c3fa0723111de086ccb0c46eea6316e3241"} Mar 08 00:43:42 crc kubenswrapper[4762]: I0308 00:43:42.804638 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a759d745-52d2-48f8-9848-172ace2b5120","Type":"ContainerStarted","Data":"7bdb2ea1f65eb7942f2f7e3865b6d5415488a631bf1ade298c09336b2f2e6d96"} Mar 08 00:43:43 crc kubenswrapper[4762]: I0308 00:43:43.377535 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b987k"] Mar 08 00:43:43 crc kubenswrapper[4762]: W0308 00:43:43.447334 4762 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63452061_1f2b_471a_bb81_e71fa2249560.slice/crio-d62ba65194f2c17b4174242f78443b456867db6e1939db7b1eec6eca2501cdad WatchSource:0}: Error finding container d62ba65194f2c17b4174242f78443b456867db6e1939db7b1eec6eca2501cdad: Status 404 returned error can't find the container with id d62ba65194f2c17b4174242f78443b456867db6e1939db7b1eec6eca2501cdad Mar 08 00:43:43 crc kubenswrapper[4762]: I0308 00:43:43.447893 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.559676 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-b9mmd"] Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.626930 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5382434b-f08f-479e-aa77-d5e0d436c6eb-dns-svc\") pod \"5382434b-f08f-479e-aa77-d5e0d436c6eb\" (UID: \"5382434b-f08f-479e-aa77-d5e0d436c6eb\") " Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.627044 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzxnd\" (UniqueName: \"kubernetes.io/projected/5382434b-f08f-479e-aa77-d5e0d436c6eb-kube-api-access-lzxnd\") pod \"5382434b-f08f-479e-aa77-d5e0d436c6eb\" (UID: \"5382434b-f08f-479e-aa77-d5e0d436c6eb\") " Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.627123 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5382434b-f08f-479e-aa77-d5e0d436c6eb-config\") pod \"5382434b-f08f-479e-aa77-d5e0d436c6eb\" (UID: \"5382434b-f08f-479e-aa77-d5e0d436c6eb\") " Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.637289 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/5382434b-f08f-479e-aa77-d5e0d436c6eb-kube-api-access-lzxnd" (OuterVolumeSpecName: "kube-api-access-lzxnd") pod "5382434b-f08f-479e-aa77-d5e0d436c6eb" (UID: "5382434b-f08f-479e-aa77-d5e0d436c6eb"). InnerVolumeSpecName "kube-api-access-lzxnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.672486 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5382434b-f08f-479e-aa77-d5e0d436c6eb-config" (OuterVolumeSpecName: "config") pod "5382434b-f08f-479e-aa77-d5e0d436c6eb" (UID: "5382434b-f08f-479e-aa77-d5e0d436c6eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.684791 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5382434b-f08f-479e-aa77-d5e0d436c6eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5382434b-f08f-479e-aa77-d5e0d436c6eb" (UID: "5382434b-f08f-479e-aa77-d5e0d436c6eb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.728852 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzxnd\" (UniqueName: \"kubernetes.io/projected/5382434b-f08f-479e-aa77-d5e0d436c6eb-kube-api-access-lzxnd\") on node \"crc\" DevicePath \"\"" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.728888 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5382434b-f08f-479e-aa77-d5e0d436c6eb-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.728899 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5382434b-f08f-479e-aa77-d5e0d436c6eb-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.814852 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b987k" event={"ID":"63452061-1f2b-471a-bb81-e71fa2249560","Type":"ContainerStarted","Data":"d62ba65194f2c17b4174242f78443b456867db6e1939db7b1eec6eca2501cdad"} Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.817915 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" event={"ID":"e9885b07-ca68-486f-b6ec-995cde630f8a","Type":"ContainerStarted","Data":"b8ce49c410a2b01f10c9059689e207b63f2424ea669360e954fd2d38f33e569b"} Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.818065 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" podUID="e9885b07-ca68-486f-b6ec-995cde630f8a" containerName="dnsmasq-dns" containerID="cri-o://b8ce49c410a2b01f10c9059689e207b63f2424ea669360e954fd2d38f33e569b" gracePeriod=10 Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.818128 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.820137 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0d82ab27-d2d8-486a-8514-2af542e4223a","Type":"ContainerStarted","Data":"b94d9c36951363013692225f8f866cf70b106be4a55a722a215c26cd2d509e15"} Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.822850 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"58ff3546-162f-4796-961b-2943d7465355","Type":"ContainerStarted","Data":"fb4e459dad50a837b891827c7886723ff2481e99d2250fffd8c1e78123885c5c"} Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.822943 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.825561 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-rjj98" event={"ID":"5e5e70a6-f33a-4930-9699-83dfa11cf98d","Type":"ContainerStarted","Data":"22969a3aa795fdecd13b532149cb5d4cb12f06373fa01676af7cb59c3e5994e5"} Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.827528 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f50a5390-b172-470a-bcfd-161e360d90db","Type":"ContainerStarted","Data":"2d8bf60a889332a94b9a83a83035fb42c71135c015535ec062c080d3c7929f79"} Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.829532 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" event={"ID":"5382434b-f08f-479e-aa77-d5e0d436c6eb","Type":"ContainerDied","Data":"879141bf6cfbea916392a90846570da61728aa0c017618c408fbc1befa42e92b"} Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.829572 4762 scope.go:117] "RemoveContainer" containerID="55bf57f36a2af1d7a5359cff107f1e73c4e9445aa0d95b7f05f37c308455465c" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 
00:43:43.829710 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mt6gt" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.837048 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" podStartSLOduration=18.296442314 podStartE2EDuration="27.837027138s" podCreationTimestamp="2026-03-08 00:43:16 +0000 UTC" firstStartedPulling="2026-03-08 00:43:22.043279545 +0000 UTC m=+1223.517423889" lastFinishedPulling="2026-03-08 00:43:31.583864369 +0000 UTC m=+1233.058008713" observedRunningTime="2026-03-08 00:43:43.832657185 +0000 UTC m=+1245.306801529" watchObservedRunningTime="2026-03-08 00:43:43.837027138 +0000 UTC m=+1245.311171482" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.873576 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=16.524911848 podStartE2EDuration="23.873556839s" podCreationTimestamp="2026-03-08 00:43:20 +0000 UTC" firstStartedPulling="2026-03-08 00:43:32.465364121 +0000 UTC m=+1233.939508465" lastFinishedPulling="2026-03-08 00:43:39.814009112 +0000 UTC m=+1241.288153456" observedRunningTime="2026-03-08 00:43:43.865967768 +0000 UTC m=+1245.340112112" watchObservedRunningTime="2026-03-08 00:43:43.873556839 +0000 UTC m=+1245.347701183" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.916445 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-rjj98" podStartSLOduration=13.772705126 podStartE2EDuration="20.916426423s" podCreationTimestamp="2026-03-08 00:43:23 +0000 UTC" firstStartedPulling="2026-03-08 00:43:32.793788971 +0000 UTC m=+1234.267933315" lastFinishedPulling="2026-03-08 00:43:39.937510268 +0000 UTC m=+1241.411654612" observedRunningTime="2026-03-08 00:43:43.914781102 +0000 UTC m=+1245.388925446" watchObservedRunningTime="2026-03-08 
00:43:43.916426423 +0000 UTC m=+1245.390570767" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.940873 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mt6gt"] Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:43.947082 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mt6gt"] Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:44.334108 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:44.334401 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:44.841305 4762 generic.go:334] "Generic (PLEG): container finished" podID="21774b04-29d4-4687-b650-87eed791f3e8" containerID="9db075d602af2f992acd1dbda186cd73348dd09fd32fe7a15d061e3369b9dea5" exitCode=0 Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:44.841946 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ffhbt" event={"ID":"21774b04-29d4-4687-b650-87eed791f3e8","Type":"ContainerDied","Data":"9db075d602af2f992acd1dbda186cd73348dd09fd32fe7a15d061e3369b9dea5"} Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:44.844824 4762 generic.go:334] "Generic (PLEG): container finished" podID="e9885b07-ca68-486f-b6ec-995cde630f8a" containerID="b8ce49c410a2b01f10c9059689e207b63f2424ea669360e954fd2d38f33e569b" exitCode=0 Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:44.845582 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" event={"ID":"e9885b07-ca68-486f-b6ec-995cde630f8a","Type":"ContainerDied","Data":"b8ce49c410a2b01f10c9059689e207b63f2424ea669360e954fd2d38f33e569b"} Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:45.278255 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="5382434b-f08f-479e-aa77-d5e0d436c6eb" path="/var/lib/kubelet/pods/5382434b-f08f-479e-aa77-d5e0d436c6eb/volumes" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:47.527815 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:47.534220 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:47.611291 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-55d845484c-b9ht8"] Mar 08 00:43:47 crc kubenswrapper[4762]: W0308 00:43:47.848361 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57550501_ec35_4064_8e34_d470df9c2721.slice/crio-bc4013e399e6acc7351cde5a46db028917ee92d07c595fa17d92325060c31a19 WatchSource:0}: Error finding container bc4013e399e6acc7351cde5a46db028917ee92d07c595fa17d92325060c31a19: Status 404 returned error can't find the container with id bc4013e399e6acc7351cde5a46db028917ee92d07c595fa17d92325060c31a19 Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:47.857044 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-zqvc6"] Mar 08 00:43:47 crc kubenswrapper[4762]: W0308 00:43:47.870228 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9af8496_3ac1_4adc_b887_6fb0d2a02f16.slice/crio-d737121508a203691e02c6bb1722c8c9c5b32bb1ad28796a9091788505755d91 WatchSource:0}: Error finding container d737121508a203691e02c6bb1722c8c9c5b32bb1ad28796a9091788505755d91: Status 404 returned error can't find the container with id d737121508a203691e02c6bb1722c8c9c5b32bb1ad28796a9091788505755d91 Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:47.885353 4762 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"02437d1d-337c-4013-92e1-69125f57e03f","Type":"ContainerStarted","Data":"c8094726bd70c9914115430c557943bb1dfd829b72cda14ae7a8bc2882dfe912"} Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:47.887781 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-b9mmd" event={"ID":"57550501-ec35-4064-8e34-d470df9c2721","Type":"ContainerStarted","Data":"bc4013e399e6acc7351cde5a46db028917ee92d07c595fa17d92325060c31a19"} Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:47.889886 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" event={"ID":"c9af8496-3ac1-4adc-b887-6fb0d2a02f16","Type":"ContainerStarted","Data":"d737121508a203691e02c6bb1722c8c9c5b32bb1ad28796a9091788505755d91"} Mar 08 00:43:47 crc kubenswrapper[4762]: I0308 00:43:47.895192 4762 scope.go:117] "RemoveContainer" containerID="bcbb7b01f5895ce9d8bded95677e1e5e4b70c39a75c586a15b30915e9cf873e2" Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.277225 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.369400 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9885b07-ca68-486f-b6ec-995cde630f8a-config\") pod \"e9885b07-ca68-486f-b6ec-995cde630f8a\" (UID: \"e9885b07-ca68-486f-b6ec-995cde630f8a\") " Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.369528 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9885b07-ca68-486f-b6ec-995cde630f8a-dns-svc\") pod \"e9885b07-ca68-486f-b6ec-995cde630f8a\" (UID: \"e9885b07-ca68-486f-b6ec-995cde630f8a\") " Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.369587 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4rds\" (UniqueName: \"kubernetes.io/projected/e9885b07-ca68-486f-b6ec-995cde630f8a-kube-api-access-h4rds\") pod \"e9885b07-ca68-486f-b6ec-995cde630f8a\" (UID: \"e9885b07-ca68-486f-b6ec-995cde630f8a\") " Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.377771 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9885b07-ca68-486f-b6ec-995cde630f8a-kube-api-access-h4rds" (OuterVolumeSpecName: "kube-api-access-h4rds") pod "e9885b07-ca68-486f-b6ec-995cde630f8a" (UID: "e9885b07-ca68-486f-b6ec-995cde630f8a"). InnerVolumeSpecName "kube-api-access-h4rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.471236 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4rds\" (UniqueName: \"kubernetes.io/projected/e9885b07-ca68-486f-b6ec-995cde630f8a-kube-api-access-h4rds\") on node \"crc\" DevicePath \"\"" Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.479553 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9885b07-ca68-486f-b6ec-995cde630f8a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e9885b07-ca68-486f-b6ec-995cde630f8a" (UID: "e9885b07-ca68-486f-b6ec-995cde630f8a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.490317 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9885b07-ca68-486f-b6ec-995cde630f8a-config" (OuterVolumeSpecName: "config") pod "e9885b07-ca68-486f-b6ec-995cde630f8a" (UID: "e9885b07-ca68-486f-b6ec-995cde630f8a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.573888 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9885b07-ca68-486f-b6ec-995cde630f8a-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.574252 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e9885b07-ca68-486f-b6ec-995cde630f8a-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.919644 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.920327 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-lh8v4" event={"ID":"e9885b07-ca68-486f-b6ec-995cde630f8a","Type":"ContainerDied","Data":"567cbdeee3c6da14e70e634b1d533804b847abc51920438b5893c118d4ff5588"} Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.920471 4762 scope.go:117] "RemoveContainer" containerID="b8ce49c410a2b01f10c9059689e207b63f2424ea669360e954fd2d38f33e569b" Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.929925 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c232bd40-b650-4530-8df3-2fd1c3f57398","Type":"ContainerStarted","Data":"2d1f50b550c692de5898bce2fc87b30ac89befd06d511616c6360f6417b34c5c"} Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.933567 4762 generic.go:334] "Generic (PLEG): container finished" podID="c9af8496-3ac1-4adc-b887-6fb0d2a02f16" containerID="8a10e60e1fd6aaa872cfda33d6443cb8df5c961fc6033f9b285252cdf65110c6" exitCode=0 Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.933619 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" event={"ID":"c9af8496-3ac1-4adc-b887-6fb0d2a02f16","Type":"ContainerDied","Data":"8a10e60e1fd6aaa872cfda33d6443cb8df5c961fc6033f9b285252cdf65110c6"} Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.936899 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kkckg" event={"ID":"bcab1df7-ddcc-4784-8a49-0be5161590f2","Type":"ContainerStarted","Data":"5df0204ba11ef924fda0dfea6a0b25628373221352d3827226e017be0d93f5b0"} Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.937033 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-kkckg" Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.944809 4762 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4","Type":"ContainerStarted","Data":"b60534fabdded1240f645a37d304e4967c570110f7e051e6e36f6cd8e1737688"} Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.949406 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lh8v4"] Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.951484 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ffhbt" event={"ID":"21774b04-29d4-4687-b650-87eed791f3e8","Type":"ContainerStarted","Data":"4f0cf06195583db5cf4d900b32a0c134804682e1105657a0777d4564bc2b7d46"} Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.951518 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ffhbt" event={"ID":"21774b04-29d4-4687-b650-87eed791f3e8","Type":"ContainerStarted","Data":"ce1db3672b3418b213b11307f022e0eba53411c7411411ca8bfd0ea84cb867e3"} Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.951561 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.951589 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.952364 4762 scope.go:117] "RemoveContainer" containerID="90aad9fd7085a7cb69c7a807f4a63406ccc2826255ac1c00243b3a69b03a8c4b" Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.953417 4762 generic.go:334] "Generic (PLEG): container finished" podID="57550501-ec35-4064-8e34-d470df9c2721" containerID="cdbafd6205dbc13ecda8d78e7c47961fa770ff4ae6944c9eb67eeb9be9eb26db" exitCode=0 Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.953444 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-b9mmd" 
event={"ID":"57550501-ec35-4064-8e34-d470df9c2721","Type":"ContainerDied","Data":"cdbafd6205dbc13ecda8d78e7c47961fa770ff4ae6944c9eb67eeb9be9eb26db"} Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.957505 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-lh8v4"] Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.995380 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"24ad8db6-2015-4ebf-847c-64a8c4a548d3","Type":"ContainerStarted","Data":"d3aa62d5919c60e110b67b582ffbc719a3bee2abee9348dfc2f4b323d33bec97"} Mar 08 00:43:48 crc kubenswrapper[4762]: I0308 00:43:48.996020 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 08 00:43:49 crc kubenswrapper[4762]: I0308 00:43:49.004260 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-kkckg" podStartSLOduration=15.595242782 podStartE2EDuration="23.004240327s" podCreationTimestamp="2026-03-08 00:43:26 +0000 UTC" firstStartedPulling="2026-03-08 00:43:32.483150923 +0000 UTC m=+1233.957295267" lastFinishedPulling="2026-03-08 00:43:39.892148468 +0000 UTC m=+1241.366292812" observedRunningTime="2026-03-08 00:43:48.99448643 +0000 UTC m=+1250.468630784" watchObservedRunningTime="2026-03-08 00:43:49.004240327 +0000 UTC m=+1250.478384671" Mar 08 00:43:49 crc kubenswrapper[4762]: I0308 00:43:49.167414 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.849023878 podStartE2EDuration="26.167369299s" podCreationTimestamp="2026-03-08 00:43:23 +0000 UTC" firstStartedPulling="2026-03-08 00:43:32.813770609 +0000 UTC m=+1234.287914953" lastFinishedPulling="2026-03-08 00:43:48.13211603 +0000 UTC m=+1249.606260374" observedRunningTime="2026-03-08 00:43:49.117314426 +0000 UTC m=+1250.591458770" watchObservedRunningTime="2026-03-08 00:43:49.167369299 
+0000 UTC m=+1250.641513653" Mar 08 00:43:49 crc kubenswrapper[4762]: I0308 00:43:49.204158 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-ffhbt" podStartSLOduration=17.467492278 podStartE2EDuration="23.204136417s" podCreationTimestamp="2026-03-08 00:43:26 +0000 UTC" firstStartedPulling="2026-03-08 00:43:34.141310317 +0000 UTC m=+1235.615454661" lastFinishedPulling="2026-03-08 00:43:39.877954456 +0000 UTC m=+1241.352098800" observedRunningTime="2026-03-08 00:43:49.182728066 +0000 UTC m=+1250.656872410" watchObservedRunningTime="2026-03-08 00:43:49.204136417 +0000 UTC m=+1250.678280751" Mar 08 00:43:49 crc kubenswrapper[4762]: I0308 00:43:49.276292 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9885b07-ca68-486f-b6ec-995cde630f8a" path="/var/lib/kubelet/pods/e9885b07-ca68-486f-b6ec-995cde630f8a/volumes" Mar 08 00:43:50 crc kubenswrapper[4762]: I0308 00:43:50.012303 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-b9mmd" event={"ID":"57550501-ec35-4064-8e34-d470df9c2721","Type":"ContainerStarted","Data":"a53cbb95f7bafa9c033a12779c97e00308b3a95122df1fcd45a1dc75552a93f9"} Mar 08 00:43:50 crc kubenswrapper[4762]: I0308 00:43:50.012652 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:50 crc kubenswrapper[4762]: I0308 00:43:50.015870 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" event={"ID":"c9af8496-3ac1-4adc-b887-6fb0d2a02f16","Type":"ContainerStarted","Data":"9abaf755f79b0ec2580f1b0e9f5183ec69fa84c123283402c5b142e9fdcf3c66"} Mar 08 00:43:50 crc kubenswrapper[4762]: I0308 00:43:50.015945 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:50 crc kubenswrapper[4762]: I0308 00:43:50.045500 4762 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-b9mmd" podStartSLOduration=10.045482397 podStartE2EDuration="10.045482397s" podCreationTimestamp="2026-03-08 00:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:43:50.035743382 +0000 UTC m=+1251.509887746" watchObservedRunningTime="2026-03-08 00:43:50.045482397 +0000 UTC m=+1251.519626741" Mar 08 00:43:50 crc kubenswrapper[4762]: I0308 00:43:50.061593 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" podStartSLOduration=11.061573828 podStartE2EDuration="11.061573828s" podCreationTimestamp="2026-03-08 00:43:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:43:50.05244628 +0000 UTC m=+1251.526590624" watchObservedRunningTime="2026-03-08 00:43:50.061573828 +0000 UTC m=+1251.535718182" Mar 08 00:43:51 crc kubenswrapper[4762]: I0308 00:43:51.034901 4762 generic.go:334] "Generic (PLEG): container finished" podID="0d82ab27-d2d8-486a-8514-2af542e4223a" containerID="b94d9c36951363013692225f8f866cf70b106be4a55a722a215c26cd2d509e15" exitCode=0 Mar 08 00:43:51 crc kubenswrapper[4762]: I0308 00:43:51.035009 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0d82ab27-d2d8-486a-8514-2af542e4223a","Type":"ContainerDied","Data":"b94d9c36951363013692225f8f866cf70b106be4a55a722a215c26cd2d509e15"} Mar 08 00:43:51 crc kubenswrapper[4762]: I0308 00:43:51.039191 4762 generic.go:334] "Generic (PLEG): container finished" podID="f50a5390-b172-470a-bcfd-161e360d90db" containerID="2d8bf60a889332a94b9a83a83035fb42c71135c015535ec062c080d3c7929f79" exitCode=0 Mar 08 00:43:51 crc kubenswrapper[4762]: I0308 00:43:51.039259 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"f50a5390-b172-470a-bcfd-161e360d90db","Type":"ContainerDied","Data":"2d8bf60a889332a94b9a83a83035fb42c71135c015535ec062c080d3c7929f79"} Mar 08 00:43:51 crc kubenswrapper[4762]: I0308 00:43:51.209845 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.056692 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c3e5a947-ec53-4871-a0d8-c51ca14cf8c4","Type":"ContainerStarted","Data":"eff6e2d8697c5c7b4d94b70889b7442d8312c7a4e89a6709b12421c457f38a81"} Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.061195 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0d82ab27-d2d8-486a-8514-2af542e4223a","Type":"ContainerStarted","Data":"64889e7ad2b231700461464a45111bfb6b179f3dc972c45ad8da27d64c23a1b5"} Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.066086 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f50a5390-b172-470a-bcfd-161e360d90db","Type":"ContainerStarted","Data":"ab2d98bfe519b063c5ee46d89c20f223f11a485732e05ac73d9868a4128c5e19"} Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.069076 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c232bd40-b650-4530-8df3-2fd1c3f57398","Type":"ContainerStarted","Data":"353e6864b280c56a5115db873191966f068e2bf34ac8640d1ba65c3691fa833b"} Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.070610 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b987k" event={"ID":"63452061-1f2b-471a-bb81-e71fa2249560","Type":"ContainerStarted","Data":"4cc6a471209086937faf3ee05acd5681259ae2d9a878756e76dda6669ef522d5"} Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.124568 4762 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=9.663541064 podStartE2EDuration="28.124548353s" podCreationTimestamp="2026-03-08 00:43:25 +0000 UTC" firstStartedPulling="2026-03-08 00:43:34.104146678 +0000 UTC m=+1235.578291012" lastFinishedPulling="2026-03-08 00:43:52.565153957 +0000 UTC m=+1254.039298301" observedRunningTime="2026-03-08 00:43:53.088697951 +0000 UTC m=+1254.562842335" watchObservedRunningTime="2026-03-08 00:43:53.124548353 +0000 UTC m=+1254.598692697" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.129207 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.151771524 podStartE2EDuration="34.129181283s" podCreationTimestamp="2026-03-08 00:43:19 +0000 UTC" firstStartedPulling="2026-03-08 00:43:32.275000321 +0000 UTC m=+1233.749144665" lastFinishedPulling="2026-03-08 00:43:39.25241008 +0000 UTC m=+1240.726554424" observedRunningTime="2026-03-08 00:43:53.121315334 +0000 UTC m=+1254.595459678" watchObservedRunningTime="2026-03-08 00:43:53.129181283 +0000 UTC m=+1254.603325647" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.159697 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.573954566 podStartE2EDuration="24.159681031s" podCreationTimestamp="2026-03-08 00:43:29 +0000 UTC" firstStartedPulling="2026-03-08 00:43:34.104559639 +0000 UTC m=+1235.578703983" lastFinishedPulling="2026-03-08 00:43:52.690286094 +0000 UTC m=+1254.164430448" observedRunningTime="2026-03-08 00:43:53.149387208 +0000 UTC m=+1254.623531552" watchObservedRunningTime="2026-03-08 00:43:53.159681031 +0000 UTC m=+1254.633825375" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.180446 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=28.137429438 podStartE2EDuration="35.180430282s" 
podCreationTimestamp="2026-03-08 00:43:18 +0000 UTC" firstStartedPulling="2026-03-08 00:43:31.431436083 +0000 UTC m=+1232.905580427" lastFinishedPulling="2026-03-08 00:43:38.474436917 +0000 UTC m=+1239.948581271" observedRunningTime="2026-03-08 00:43:53.17313799 +0000 UTC m=+1254.647282354" watchObservedRunningTime="2026-03-08 00:43:53.180430282 +0000 UTC m=+1254.654574626" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.191952 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-b987k" podStartSLOduration=5.060037542 podStartE2EDuration="14.191937743s" podCreationTimestamp="2026-03-08 00:43:39 +0000 UTC" firstStartedPulling="2026-03-08 00:43:43.455717679 +0000 UTC m=+1244.929862023" lastFinishedPulling="2026-03-08 00:43:52.58761788 +0000 UTC m=+1254.061762224" observedRunningTime="2026-03-08 00:43:53.188345383 +0000 UTC m=+1254.662489737" watchObservedRunningTime="2026-03-08 00:43:53.191937743 +0000 UTC m=+1254.666082087" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.339913 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-zqvc6"] Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.340121 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" podUID="c9af8496-3ac1-4adc-b887-6fb0d2a02f16" containerName="dnsmasq-dns" containerID="cri-o://9abaf755f79b0ec2580f1b0e9f5183ec69fa84c123283402c5b142e9fdcf3c66" gracePeriod=10 Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.363029 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-28hm7"] Mar 08 00:43:53 crc kubenswrapper[4762]: E0308 00:43:53.363323 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5382434b-f08f-479e-aa77-d5e0d436c6eb" containerName="dnsmasq-dns" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.363338 4762 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5382434b-f08f-479e-aa77-d5e0d436c6eb" containerName="dnsmasq-dns" Mar 08 00:43:53 crc kubenswrapper[4762]: E0308 00:43:53.363349 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9885b07-ca68-486f-b6ec-995cde630f8a" containerName="init" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.363357 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9885b07-ca68-486f-b6ec-995cde630f8a" containerName="init" Mar 08 00:43:53 crc kubenswrapper[4762]: E0308 00:43:53.363367 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5382434b-f08f-479e-aa77-d5e0d436c6eb" containerName="init" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.363373 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="5382434b-f08f-479e-aa77-d5e0d436c6eb" containerName="init" Mar 08 00:43:53 crc kubenswrapper[4762]: E0308 00:43:53.363392 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9885b07-ca68-486f-b6ec-995cde630f8a" containerName="dnsmasq-dns" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.363397 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9885b07-ca68-486f-b6ec-995cde630f8a" containerName="dnsmasq-dns" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.363548 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9885b07-ca68-486f-b6ec-995cde630f8a" containerName="dnsmasq-dns" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.363560 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="5382434b-f08f-479e-aa77-d5e0d436c6eb" containerName="dnsmasq-dns" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.367118 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.387501 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-28hm7"] Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.442143 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.451066 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-config\") pod \"dnsmasq-dns-b8fbc5445-28hm7\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.451134 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-28hm7\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.451231 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xd8r\" (UniqueName: \"kubernetes.io/projected/2d8bdde6-0986-49af-98a6-9879bd12953c-kube-api-access-6xd8r\") pod \"dnsmasq-dns-b8fbc5445-28hm7\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.451259 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-28hm7\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.451277 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-28hm7\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.552696 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-28hm7\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.552873 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xd8r\" (UniqueName: \"kubernetes.io/projected/2d8bdde6-0986-49af-98a6-9879bd12953c-kube-api-access-6xd8r\") pod \"dnsmasq-dns-b8fbc5445-28hm7\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.552900 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-28hm7\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.552921 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-28hm7\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 
crc kubenswrapper[4762]: I0308 00:43:53.552952 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-config\") pod \"dnsmasq-dns-b8fbc5445-28hm7\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.553770 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-config\") pod \"dnsmasq-dns-b8fbc5445-28hm7\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.554259 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-28hm7\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.555153 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-28hm7\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.556870 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-28hm7\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.589608 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6xd8r\" (UniqueName: \"kubernetes.io/projected/2d8bdde6-0986-49af-98a6-9879bd12953c-kube-api-access-6xd8r\") pod \"dnsmasq-dns-b8fbc5445-28hm7\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.683369 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.805775 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.861553 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-config\") pod \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\" (UID: \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\") " Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.861646 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-ovsdbserver-sb\") pod \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\" (UID: \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\") " Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.861677 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvrcs\" (UniqueName: \"kubernetes.io/projected/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-kube-api-access-cvrcs\") pod \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\" (UID: \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\") " Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.861737 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-dns-svc\") pod \"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\" (UID: 
\"c9af8496-3ac1-4adc-b887-6fb0d2a02f16\") " Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.868932 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-kube-api-access-cvrcs" (OuterVolumeSpecName: "kube-api-access-cvrcs") pod "c9af8496-3ac1-4adc-b887-6fb0d2a02f16" (UID: "c9af8496-3ac1-4adc-b887-6fb0d2a02f16"). InnerVolumeSpecName "kube-api-access-cvrcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.928891 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-config" (OuterVolumeSpecName: "config") pod "c9af8496-3ac1-4adc-b887-6fb0d2a02f16" (UID: "c9af8496-3ac1-4adc-b887-6fb0d2a02f16"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.929114 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c9af8496-3ac1-4adc-b887-6fb0d2a02f16" (UID: "c9af8496-3ac1-4adc-b887-6fb0d2a02f16"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.947665 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c9af8496-3ac1-4adc-b887-6fb0d2a02f16" (UID: "c9af8496-3ac1-4adc-b887-6fb0d2a02f16"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.965748 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.965802 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.965816 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvrcs\" (UniqueName: \"kubernetes.io/projected/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-kube-api-access-cvrcs\") on node \"crc\" DevicePath \"\"" Mar 08 00:43:53 crc kubenswrapper[4762]: I0308 00:43:53.965825 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c9af8496-3ac1-4adc-b887-6fb0d2a02f16-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.080470 4762 generic.go:334] "Generic (PLEG): container finished" podID="c9af8496-3ac1-4adc-b887-6fb0d2a02f16" containerID="9abaf755f79b0ec2580f1b0e9f5183ec69fa84c123283402c5b142e9fdcf3c66" exitCode=0 Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.081319 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.089728 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" event={"ID":"c9af8496-3ac1-4adc-b887-6fb0d2a02f16","Type":"ContainerDied","Data":"9abaf755f79b0ec2580f1b0e9f5183ec69fa84c123283402c5b142e9fdcf3c66"} Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.089772 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-zqvc6" event={"ID":"c9af8496-3ac1-4adc-b887-6fb0d2a02f16","Type":"ContainerDied","Data":"d737121508a203691e02c6bb1722c8c9c5b32bb1ad28796a9091788505755d91"} Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.089790 4762 scope.go:117] "RemoveContainer" containerID="9abaf755f79b0ec2580f1b0e9f5183ec69fa84c123283402c5b142e9fdcf3c66" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.121812 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-zqvc6"] Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.125984 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-zqvc6"] Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.140742 4762 scope.go:117] "RemoveContainer" containerID="8a10e60e1fd6aaa872cfda33d6443cb8df5c961fc6033f9b285252cdf65110c6" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.196977 4762 scope.go:117] "RemoveContainer" containerID="9abaf755f79b0ec2580f1b0e9f5183ec69fa84c123283402c5b142e9fdcf3c66" Mar 08 00:43:54 crc kubenswrapper[4762]: E0308 00:43:54.198254 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9abaf755f79b0ec2580f1b0e9f5183ec69fa84c123283402c5b142e9fdcf3c66\": container with ID starting with 9abaf755f79b0ec2580f1b0e9f5183ec69fa84c123283402c5b142e9fdcf3c66 not found: ID does not exist" 
containerID="9abaf755f79b0ec2580f1b0e9f5183ec69fa84c123283402c5b142e9fdcf3c66" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.198315 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9abaf755f79b0ec2580f1b0e9f5183ec69fa84c123283402c5b142e9fdcf3c66"} err="failed to get container status \"9abaf755f79b0ec2580f1b0e9f5183ec69fa84c123283402c5b142e9fdcf3c66\": rpc error: code = NotFound desc = could not find container \"9abaf755f79b0ec2580f1b0e9f5183ec69fa84c123283402c5b142e9fdcf3c66\": container with ID starting with 9abaf755f79b0ec2580f1b0e9f5183ec69fa84c123283402c5b142e9fdcf3c66 not found: ID does not exist" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.198352 4762 scope.go:117] "RemoveContainer" containerID="8a10e60e1fd6aaa872cfda33d6443cb8df5c961fc6033f9b285252cdf65110c6" Mar 08 00:43:54 crc kubenswrapper[4762]: E0308 00:43:54.202516 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a10e60e1fd6aaa872cfda33d6443cb8df5c961fc6033f9b285252cdf65110c6\": container with ID starting with 8a10e60e1fd6aaa872cfda33d6443cb8df5c961fc6033f9b285252cdf65110c6 not found: ID does not exist" containerID="8a10e60e1fd6aaa872cfda33d6443cb8df5c961fc6033f9b285252cdf65110c6" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.202573 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a10e60e1fd6aaa872cfda33d6443cb8df5c961fc6033f9b285252cdf65110c6"} err="failed to get container status \"8a10e60e1fd6aaa872cfda33d6443cb8df5c961fc6033f9b285252cdf65110c6\": rpc error: code = NotFound desc = could not find container \"8a10e60e1fd6aaa872cfda33d6443cb8df5c961fc6033f9b285252cdf65110c6\": container with ID starting with 8a10e60e1fd6aaa872cfda33d6443cb8df5c961fc6033f9b285252cdf65110c6 not found: ID does not exist" Mar 08 00:43:54 crc kubenswrapper[4762]: W0308 00:43:54.209453 4762 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d8bdde6_0986_49af_98a6_9879bd12953c.slice/crio-6a8fc056ac466d7f36bc85463f32cc6e5b4d5c78aeda33785370a1020db1a30a WatchSource:0}: Error finding container 6a8fc056ac466d7f36bc85463f32cc6e5b4d5c78aeda33785370a1020db1a30a: Status 404 returned error can't find the container with id 6a8fc056ac466d7f36bc85463f32cc6e5b4d5c78aeda33785370a1020db1a30a Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.227373 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-28hm7"] Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.250256 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.296251 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.446786 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Mar 08 00:43:54 crc kubenswrapper[4762]: E0308 00:43:54.447605 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9af8496-3ac1-4adc-b887-6fb0d2a02f16" containerName="init" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.447622 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9af8496-3ac1-4adc-b887-6fb0d2a02f16" containerName="init" Mar 08 00:43:54 crc kubenswrapper[4762]: E0308 00:43:54.447631 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9af8496-3ac1-4adc-b887-6fb0d2a02f16" containerName="dnsmasq-dns" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.447690 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9af8496-3ac1-4adc-b887-6fb0d2a02f16" containerName="dnsmasq-dns" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.447878 4762 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c9af8496-3ac1-4adc-b887-6fb0d2a02f16" containerName="dnsmasq-dns" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.463728 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.465127 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.467614 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.474279 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.475752 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.498029 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-slgkv" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.571493 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.581932 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eb5158d2-f742-4eef-8c66-f2db685aeb9e-cache\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.581984 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eb5158d2-f742-4eef-8c66-f2db685aeb9e-lock\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: 
I0308 00:43:54.582057 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5158d2-f742-4eef-8c66-f2db685aeb9e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.582087 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmwxj\" (UniqueName: \"kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-kube-api-access-lmwxj\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.582138 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.582195 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.671231 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.684075 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eb5158d2-f742-4eef-8c66-f2db685aeb9e-cache\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 
00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.684120 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eb5158d2-f742-4eef-8c66-f2db685aeb9e-lock\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.684179 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5158d2-f742-4eef-8c66-f2db685aeb9e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.684209 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmwxj\" (UniqueName: \"kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-kube-api-access-lmwxj\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.684247 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.684288 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: E0308 00:43:54.684452 4762 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 08 00:43:54 crc kubenswrapper[4762]: E0308 
00:43:54.684476 4762 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 08 00:43:54 crc kubenswrapper[4762]: E0308 00:43:54.684522 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift podName:eb5158d2-f742-4eef-8c66-f2db685aeb9e nodeName:}" failed. No retries permitted until 2026-03-08 00:43:55.18450424 +0000 UTC m=+1256.658648584 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift") pod "swift-storage-0" (UID: "eb5158d2-f742-4eef-8c66-f2db685aeb9e") : configmap "swift-ring-files" not found Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.685009 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eb5158d2-f742-4eef-8c66-f2db685aeb9e-cache\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.685350 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/eb5158d2-f742-4eef-8c66-f2db685aeb9e-lock\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.686264 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.692908 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb5158d2-f742-4eef-8c66-f2db685aeb9e-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.703469 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmwxj\" (UniqueName: \"kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-kube-api-access-lmwxj\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.712883 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.964863 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-ch7rd"] Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.965888 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.967454 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.967657 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.967682 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 08 00:43:54 crc kubenswrapper[4762]: I0308 00:43:54.981287 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ch7rd"] Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.090517 4762 generic.go:334] "Generic (PLEG): container finished" podID="2d8bdde6-0986-49af-98a6-9879bd12953c" containerID="b330aaa1c9064ad182dd0434ffad96a778c74be173d78d369e5fcaca90bb0036" exitCode=0 Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.090552 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6720c495-ef50-49f5-ae64-d3f0bcca1f68-ring-data-devices\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.090590 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" event={"ID":"2d8bdde6-0986-49af-98a6-9879bd12953c","Type":"ContainerDied","Data":"b330aaa1c9064ad182dd0434ffad96a778c74be173d78d369e5fcaca90bb0036"} Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.090625 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6720c495-ef50-49f5-ae64-d3f0bcca1f68-scripts\") pod 
\"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.090636 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" event={"ID":"2d8bdde6-0986-49af-98a6-9879bd12953c","Type":"ContainerStarted","Data":"6a8fc056ac466d7f36bc85463f32cc6e5b4d5c78aeda33785370a1020db1a30a"} Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.090672 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-combined-ca-bundle\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.090861 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.091158 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.091484 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-swiftconf\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.091520 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6720c495-ef50-49f5-ae64-d3f0bcca1f68-etc-swift\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc 
kubenswrapper[4762]: I0308 00:43:55.091583 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-dispersionconf\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.091606 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24n9z\" (UniqueName: \"kubernetes.io/projected/6720c495-ef50-49f5-ae64-d3f0bcca1f68-kube-api-access-24n9z\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.138443 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.145882 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.203123 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6720c495-ef50-49f5-ae64-d3f0bcca1f68-ring-data-devices\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.203333 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6720c495-ef50-49f5-ae64-d3f0bcca1f68-scripts\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.203457 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-combined-ca-bundle\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.203597 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-swiftconf\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.203697 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6720c495-ef50-49f5-ae64-d3f0bcca1f68-etc-swift\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.203752 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.203916 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-dispersionconf\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.203966 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24n9z\" (UniqueName: 
\"kubernetes.io/projected/6720c495-ef50-49f5-ae64-d3f0bcca1f68-kube-api-access-24n9z\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.205147 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6720c495-ef50-49f5-ae64-d3f0bcca1f68-ring-data-devices\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.207049 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6720c495-ef50-49f5-ae64-d3f0bcca1f68-scripts\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: E0308 00:43:55.207333 4762 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 08 00:43:55 crc kubenswrapper[4762]: E0308 00:43:55.207359 4762 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 08 00:43:55 crc kubenswrapper[4762]: E0308 00:43:55.207407 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift podName:eb5158d2-f742-4eef-8c66-f2db685aeb9e nodeName:}" failed. No retries permitted until 2026-03-08 00:43:56.207387165 +0000 UTC m=+1257.681531529 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift") pod "swift-storage-0" (UID: "eb5158d2-f742-4eef-8c66-f2db685aeb9e") : configmap "swift-ring-files" not found Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.208302 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6720c495-ef50-49f5-ae64-d3f0bcca1f68-etc-swift\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.212182 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-dispersionconf\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.215050 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-swiftconf\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.215687 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-combined-ca-bundle\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.231033 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24n9z\" (UniqueName: 
\"kubernetes.io/projected/6720c495-ef50-49f5-ae64-d3f0bcca1f68-kube-api-access-24n9z\") pod \"swift-ring-rebalance-ch7rd\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.277130 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9af8496-3ac1-4adc-b887-6fb0d2a02f16" path="/var/lib/kubelet/pods/c9af8496-3ac1-4adc-b887-6fb0d2a02f16/volumes" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.284565 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.421914 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.453376 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.455190 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.460408 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.460744 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.460944 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-89926" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.461083 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.495495 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.609675 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.609751 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.609803 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " 
pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.609847 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-scripts\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.610101 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft95t\" (UniqueName: \"kubernetes.io/projected/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-kube-api-access-ft95t\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.610158 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-config\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.610301 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.712038 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.712115 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-scripts\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.712210 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft95t\" (UniqueName: \"kubernetes.io/projected/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-kube-api-access-ft95t\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.712240 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-config\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.712304 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.712378 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.712423 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-ovn-rundir\") pod \"ovn-northd-0\" (UID: 
\"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.713176 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-scripts\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.713216 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.713297 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-config\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.720835 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.720955 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.721824 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.732672 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft95t\" (UniqueName: \"kubernetes.io/projected/522084fd-c43a-45ad-a62a-a6a24d4e1a1b-kube-api-access-ft95t\") pod \"ovn-northd-0\" (UID: \"522084fd-c43a-45ad-a62a-a6a24d4e1a1b\") " pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.796046 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 08 00:43:55 crc kubenswrapper[4762]: I0308 00:43:55.907391 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ch7rd"] Mar 08 00:43:56 crc kubenswrapper[4762]: I0308 00:43:56.100868 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ch7rd" event={"ID":"6720c495-ef50-49f5-ae64-d3f0bcca1f68","Type":"ContainerStarted","Data":"32d014d61b1db483c42aceef098ed005b8d2d24aa3efc54809ed77d10ef63184"} Mar 08 00:43:56 crc kubenswrapper[4762]: I0308 00:43:56.119998 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" event={"ID":"2d8bdde6-0986-49af-98a6-9879bd12953c","Type":"ContainerStarted","Data":"334753ee077fe567e4a6d0ba6c0b36d20002710037bad1964eb68c8614584e20"} Mar 08 00:43:56 crc kubenswrapper[4762]: I0308 00:43:56.120083 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:43:56 crc kubenswrapper[4762]: I0308 00:43:56.126184 4762 generic.go:334] "Generic (PLEG): container finished" podID="02437d1d-337c-4013-92e1-69125f57e03f" containerID="c8094726bd70c9914115430c557943bb1dfd829b72cda14ae7a8bc2882dfe912" exitCode=0 Mar 08 00:43:56 crc kubenswrapper[4762]: I0308 00:43:56.126844 
4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"02437d1d-337c-4013-92e1-69125f57e03f","Type":"ContainerDied","Data":"c8094726bd70c9914115430c557943bb1dfd829b72cda14ae7a8bc2882dfe912"} Mar 08 00:43:56 crc kubenswrapper[4762]: I0308 00:43:56.140631 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" podStartSLOduration=3.14061377 podStartE2EDuration="3.14061377s" podCreationTimestamp="2026-03-08 00:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:43:56.132749831 +0000 UTC m=+1257.606894175" watchObservedRunningTime="2026-03-08 00:43:56.14061377 +0000 UTC m=+1257.614758114" Mar 08 00:43:56 crc kubenswrapper[4762]: I0308 00:43:56.231676 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:56 crc kubenswrapper[4762]: E0308 00:43:56.231860 4762 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 08 00:43:56 crc kubenswrapper[4762]: E0308 00:43:56.231890 4762 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 08 00:43:56 crc kubenswrapper[4762]: E0308 00:43:56.231952 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift podName:eb5158d2-f742-4eef-8c66-f2db685aeb9e nodeName:}" failed. No retries permitted until 2026-03-08 00:43:58.231935669 +0000 UTC m=+1259.706080013 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift") pod "swift-storage-0" (UID: "eb5158d2-f742-4eef-8c66-f2db685aeb9e") : configmap "swift-ring-files" not found Mar 08 00:43:56 crc kubenswrapper[4762]: I0308 00:43:56.335466 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 08 00:43:57 crc kubenswrapper[4762]: I0308 00:43:57.141894 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"522084fd-c43a-45ad-a62a-a6a24d4e1a1b","Type":"ContainerStarted","Data":"992cfe73073ad0ce229b693853b279e5f2d38c63a08c3f5216389a6a85ac2650"} Mar 08 00:43:58 crc kubenswrapper[4762]: E0308 00:43:58.253887 4762 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.196:52060->38.102.83.196:38853: write tcp 38.102.83.196:52060->38.102.83.196:38853: write: broken pipe Mar 08 00:43:58 crc kubenswrapper[4762]: I0308 00:43:58.279885 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:43:58 crc kubenswrapper[4762]: E0308 00:43:58.280236 4762 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 08 00:43:58 crc kubenswrapper[4762]: E0308 00:43:58.280261 4762 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 08 00:43:58 crc kubenswrapper[4762]: E0308 00:43:58.280314 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift podName:eb5158d2-f742-4eef-8c66-f2db685aeb9e nodeName:}" failed. 
No retries permitted until 2026-03-08 00:44:02.280295292 +0000 UTC m=+1263.754439636 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift") pod "swift-storage-0" (UID: "eb5158d2-f742-4eef-8c66-f2db685aeb9e") : configmap "swift-ring-files" not found Mar 08 00:43:59 crc kubenswrapper[4762]: I0308 00:43:59.470062 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 08 00:43:59 crc kubenswrapper[4762]: I0308 00:43:59.470410 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 08 00:43:59 crc kubenswrapper[4762]: I0308 00:43:59.555791 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 08 00:44:00 crc kubenswrapper[4762]: I0308 00:44:00.146195 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548844-22sqx"] Mar 08 00:44:00 crc kubenswrapper[4762]: I0308 00:44:00.147743 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548844-22sqx" Mar 08 00:44:00 crc kubenswrapper[4762]: I0308 00:44:00.150021 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 00:44:00 crc kubenswrapper[4762]: I0308 00:44:00.151039 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 00:44:00 crc kubenswrapper[4762]: I0308 00:44:00.151221 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 00:44:00 crc kubenswrapper[4762]: I0308 00:44:00.168123 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548844-22sqx"] Mar 08 00:44:00 crc kubenswrapper[4762]: I0308 00:44:00.221275 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw45b\" (UniqueName: \"kubernetes.io/projected/448af2de-e96f-4db2-a811-66c9805f7f34-kube-api-access-jw45b\") pod \"auto-csr-approver-29548844-22sqx\" (UID: \"448af2de-e96f-4db2-a811-66c9805f7f34\") " pod="openshift-infra/auto-csr-approver-29548844-22sqx" Mar 08 00:44:00 crc kubenswrapper[4762]: I0308 00:44:00.264977 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 08 00:44:00 crc kubenswrapper[4762]: I0308 00:44:00.323211 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw45b\" (UniqueName: \"kubernetes.io/projected/448af2de-e96f-4db2-a811-66c9805f7f34-kube-api-access-jw45b\") pod \"auto-csr-approver-29548844-22sqx\" (UID: \"448af2de-e96f-4db2-a811-66c9805f7f34\") " pod="openshift-infra/auto-csr-approver-29548844-22sqx" Mar 08 00:44:00 crc kubenswrapper[4762]: I0308 00:44:00.346587 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw45b\" (UniqueName: 
\"kubernetes.io/projected/448af2de-e96f-4db2-a811-66c9805f7f34-kube-api-access-jw45b\") pod \"auto-csr-approver-29548844-22sqx\" (UID: \"448af2de-e96f-4db2-a811-66c9805f7f34\") " pod="openshift-infra/auto-csr-approver-29548844-22sqx" Mar 08 00:44:00 crc kubenswrapper[4762]: I0308 00:44:00.486210 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548844-22sqx" Mar 08 00:44:00 crc kubenswrapper[4762]: I0308 00:44:00.884494 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 08 00:44:00 crc kubenswrapper[4762]: I0308 00:44:00.884580 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 08 00:44:00 crc kubenswrapper[4762]: I0308 00:44:00.961946 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.318161 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.460128 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-65c5-account-create-update-b49fz"] Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.461261 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-65c5-account-create-update-b49fz" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.470607 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.492891 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-jq6nb"] Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.494015 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jq6nb" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.503621 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-65c5-account-create-update-b49fz"] Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.531619 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jq6nb"] Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.551564 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlkm7\" (UniqueName: \"kubernetes.io/projected/5aefe657-7268-4c06-be92-61d570355268-kube-api-access-dlkm7\") pod \"glance-db-create-jq6nb\" (UID: \"5aefe657-7268-4c06-be92-61d570355268\") " pod="openstack/glance-db-create-jq6nb" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.551675 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6374dea-1aa3-48b3-811e-31eb24e6c789-operator-scripts\") pod \"glance-65c5-account-create-update-b49fz\" (UID: \"e6374dea-1aa3-48b3-811e-31eb24e6c789\") " pod="openstack/glance-65c5-account-create-update-b49fz" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.551714 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5aefe657-7268-4c06-be92-61d570355268-operator-scripts\") pod \"glance-db-create-jq6nb\" (UID: \"5aefe657-7268-4c06-be92-61d570355268\") " pod="openstack/glance-db-create-jq6nb" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.551786 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs25j\" (UniqueName: \"kubernetes.io/projected/e6374dea-1aa3-48b3-811e-31eb24e6c789-kube-api-access-fs25j\") pod \"glance-65c5-account-create-update-b49fz\" (UID: 
\"e6374dea-1aa3-48b3-811e-31eb24e6c789\") " pod="openstack/glance-65c5-account-create-update-b49fz" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.653114 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs25j\" (UniqueName: \"kubernetes.io/projected/e6374dea-1aa3-48b3-811e-31eb24e6c789-kube-api-access-fs25j\") pod \"glance-65c5-account-create-update-b49fz\" (UID: \"e6374dea-1aa3-48b3-811e-31eb24e6c789\") " pod="openstack/glance-65c5-account-create-update-b49fz" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.653182 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlkm7\" (UniqueName: \"kubernetes.io/projected/5aefe657-7268-4c06-be92-61d570355268-kube-api-access-dlkm7\") pod \"glance-db-create-jq6nb\" (UID: \"5aefe657-7268-4c06-be92-61d570355268\") " pod="openstack/glance-db-create-jq6nb" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.653270 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6374dea-1aa3-48b3-811e-31eb24e6c789-operator-scripts\") pod \"glance-65c5-account-create-update-b49fz\" (UID: \"e6374dea-1aa3-48b3-811e-31eb24e6c789\") " pod="openstack/glance-65c5-account-create-update-b49fz" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.653567 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5aefe657-7268-4c06-be92-61d570355268-operator-scripts\") pod \"glance-db-create-jq6nb\" (UID: \"5aefe657-7268-4c06-be92-61d570355268\") " pod="openstack/glance-db-create-jq6nb" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.654043 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6374dea-1aa3-48b3-811e-31eb24e6c789-operator-scripts\") pod 
\"glance-65c5-account-create-update-b49fz\" (UID: \"e6374dea-1aa3-48b3-811e-31eb24e6c789\") " pod="openstack/glance-65c5-account-create-update-b49fz" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.654357 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5aefe657-7268-4c06-be92-61d570355268-operator-scripts\") pod \"glance-db-create-jq6nb\" (UID: \"5aefe657-7268-4c06-be92-61d570355268\") " pod="openstack/glance-db-create-jq6nb" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.672310 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs25j\" (UniqueName: \"kubernetes.io/projected/e6374dea-1aa3-48b3-811e-31eb24e6c789-kube-api-access-fs25j\") pod \"glance-65c5-account-create-update-b49fz\" (UID: \"e6374dea-1aa3-48b3-811e-31eb24e6c789\") " pod="openstack/glance-65c5-account-create-update-b49fz" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.678691 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlkm7\" (UniqueName: \"kubernetes.io/projected/5aefe657-7268-4c06-be92-61d570355268-kube-api-access-dlkm7\") pod \"glance-db-create-jq6nb\" (UID: \"5aefe657-7268-4c06-be92-61d570355268\") " pod="openstack/glance-db-create-jq6nb" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.811953 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-65c5-account-create-update-b49fz" Mar 08 00:44:01 crc kubenswrapper[4762]: I0308 00:44:01.825743 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jq6nb" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.224475 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-5282s"] Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.229731 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5282s" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.232438 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5282s"] Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.266541 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tk2p\" (UniqueName: \"kubernetes.io/projected/f4c47864-f560-4bbd-81f2-a7b25e917468-kube-api-access-8tk2p\") pod \"keystone-db-create-5282s\" (UID: \"f4c47864-f560-4bbd-81f2-a7b25e917468\") " pod="openstack/keystone-db-create-5282s" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.266629 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c47864-f560-4bbd-81f2-a7b25e917468-operator-scripts\") pod \"keystone-db-create-5282s\" (UID: \"f4c47864-f560-4bbd-81f2-a7b25e917468\") " pod="openstack/keystone-db-create-5282s" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.332642 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-a691-account-create-update-v7fhx"] Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.333888 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a691-account-create-update-v7fhx" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.336583 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.340577 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a691-account-create-update-v7fhx"] Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.367681 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp4mc\" (UniqueName: \"kubernetes.io/projected/af6797d1-1e15-4695-8b5f-fc508fbc3dfb-kube-api-access-vp4mc\") pod \"keystone-a691-account-create-update-v7fhx\" (UID: \"af6797d1-1e15-4695-8b5f-fc508fbc3dfb\") " pod="openstack/keystone-a691-account-create-update-v7fhx" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.367727 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tk2p\" (UniqueName: \"kubernetes.io/projected/f4c47864-f560-4bbd-81f2-a7b25e917468-kube-api-access-8tk2p\") pod \"keystone-db-create-5282s\" (UID: \"f4c47864-f560-4bbd-81f2-a7b25e917468\") " pod="openstack/keystone-db-create-5282s" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.367795 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c47864-f560-4bbd-81f2-a7b25e917468-operator-scripts\") pod \"keystone-db-create-5282s\" (UID: \"f4c47864-f560-4bbd-81f2-a7b25e917468\") " pod="openstack/keystone-db-create-5282s" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.367850 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " 
pod="openstack/swift-storage-0" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.367870 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af6797d1-1e15-4695-8b5f-fc508fbc3dfb-operator-scripts\") pod \"keystone-a691-account-create-update-v7fhx\" (UID: \"af6797d1-1e15-4695-8b5f-fc508fbc3dfb\") " pod="openstack/keystone-a691-account-create-update-v7fhx" Mar 08 00:44:02 crc kubenswrapper[4762]: E0308 00:44:02.368009 4762 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 08 00:44:02 crc kubenswrapper[4762]: E0308 00:44:02.368022 4762 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 08 00:44:02 crc kubenswrapper[4762]: E0308 00:44:02.368055 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift podName:eb5158d2-f742-4eef-8c66-f2db685aeb9e nodeName:}" failed. No retries permitted until 2026-03-08 00:44:10.368041597 +0000 UTC m=+1271.842185941 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift") pod "swift-storage-0" (UID: "eb5158d2-f742-4eef-8c66-f2db685aeb9e") : configmap "swift-ring-files" not found Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.368486 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c47864-f560-4bbd-81f2-a7b25e917468-operator-scripts\") pod \"keystone-db-create-5282s\" (UID: \"f4c47864-f560-4bbd-81f2-a7b25e917468\") " pod="openstack/keystone-db-create-5282s" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.383234 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tk2p\" (UniqueName: \"kubernetes.io/projected/f4c47864-f560-4bbd-81f2-a7b25e917468-kube-api-access-8tk2p\") pod \"keystone-db-create-5282s\" (UID: \"f4c47864-f560-4bbd-81f2-a7b25e917468\") " pod="openstack/keystone-db-create-5282s" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.426230 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-5tsjp"] Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.427677 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5tsjp" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.454348 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5tsjp"] Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.469515 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq6tj\" (UniqueName: \"kubernetes.io/projected/2b804311-8ca9-4928-b16c-626fb3fc2db1-kube-api-access-qq6tj\") pod \"placement-db-create-5tsjp\" (UID: \"2b804311-8ca9-4928-b16c-626fb3fc2db1\") " pod="openstack/placement-db-create-5tsjp" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.469734 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af6797d1-1e15-4695-8b5f-fc508fbc3dfb-operator-scripts\") pod \"keystone-a691-account-create-update-v7fhx\" (UID: \"af6797d1-1e15-4695-8b5f-fc508fbc3dfb\") " pod="openstack/keystone-a691-account-create-update-v7fhx" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.469869 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp4mc\" (UniqueName: \"kubernetes.io/projected/af6797d1-1e15-4695-8b5f-fc508fbc3dfb-kube-api-access-vp4mc\") pod \"keystone-a691-account-create-update-v7fhx\" (UID: \"af6797d1-1e15-4695-8b5f-fc508fbc3dfb\") " pod="openstack/keystone-a691-account-create-update-v7fhx" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.469955 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b804311-8ca9-4928-b16c-626fb3fc2db1-operator-scripts\") pod \"placement-db-create-5tsjp\" (UID: \"2b804311-8ca9-4928-b16c-626fb3fc2db1\") " pod="openstack/placement-db-create-5tsjp" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.470819 4762 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af6797d1-1e15-4695-8b5f-fc508fbc3dfb-operator-scripts\") pod \"keystone-a691-account-create-update-v7fhx\" (UID: \"af6797d1-1e15-4695-8b5f-fc508fbc3dfb\") " pod="openstack/keystone-a691-account-create-update-v7fhx" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.490727 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp4mc\" (UniqueName: \"kubernetes.io/projected/af6797d1-1e15-4695-8b5f-fc508fbc3dfb-kube-api-access-vp4mc\") pod \"keystone-a691-account-create-update-v7fhx\" (UID: \"af6797d1-1e15-4695-8b5f-fc508fbc3dfb\") " pod="openstack/keystone-a691-account-create-update-v7fhx" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.560547 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5282s" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.571965 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b804311-8ca9-4928-b16c-626fb3fc2db1-operator-scripts\") pod \"placement-db-create-5tsjp\" (UID: \"2b804311-8ca9-4928-b16c-626fb3fc2db1\") " pod="openstack/placement-db-create-5tsjp" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.572053 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq6tj\" (UniqueName: \"kubernetes.io/projected/2b804311-8ca9-4928-b16c-626fb3fc2db1-kube-api-access-qq6tj\") pod \"placement-db-create-5tsjp\" (UID: \"2b804311-8ca9-4928-b16c-626fb3fc2db1\") " pod="openstack/placement-db-create-5tsjp" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.572593 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b804311-8ca9-4928-b16c-626fb3fc2db1-operator-scripts\") pod \"placement-db-create-5tsjp\" (UID: 
\"2b804311-8ca9-4928-b16c-626fb3fc2db1\") " pod="openstack/placement-db-create-5tsjp" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.592430 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq6tj\" (UniqueName: \"kubernetes.io/projected/2b804311-8ca9-4928-b16c-626fb3fc2db1-kube-api-access-qq6tj\") pod \"placement-db-create-5tsjp\" (UID: \"2b804311-8ca9-4928-b16c-626fb3fc2db1\") " pod="openstack/placement-db-create-5tsjp" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.631033 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-991a-account-create-update-xvbkq"] Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.633232 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-991a-account-create-update-xvbkq" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.635649 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.638528 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-991a-account-create-update-xvbkq"] Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.664287 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-a691-account-create-update-v7fhx" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.674527 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c380979a-505d-4323-b423-54db896cbd32-operator-scripts\") pod \"placement-991a-account-create-update-xvbkq\" (UID: \"c380979a-505d-4323-b423-54db896cbd32\") " pod="openstack/placement-991a-account-create-update-xvbkq" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.674602 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhqtc\" (UniqueName: \"kubernetes.io/projected/c380979a-505d-4323-b423-54db896cbd32-kube-api-access-bhqtc\") pod \"placement-991a-account-create-update-xvbkq\" (UID: \"c380979a-505d-4323-b423-54db896cbd32\") " pod="openstack/placement-991a-account-create-update-xvbkq" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.752988 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5tsjp" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.782183 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c380979a-505d-4323-b423-54db896cbd32-operator-scripts\") pod \"placement-991a-account-create-update-xvbkq\" (UID: \"c380979a-505d-4323-b423-54db896cbd32\") " pod="openstack/placement-991a-account-create-update-xvbkq" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.782246 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhqtc\" (UniqueName: \"kubernetes.io/projected/c380979a-505d-4323-b423-54db896cbd32-kube-api-access-bhqtc\") pod \"placement-991a-account-create-update-xvbkq\" (UID: \"c380979a-505d-4323-b423-54db896cbd32\") " pod="openstack/placement-991a-account-create-update-xvbkq" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.783220 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c380979a-505d-4323-b423-54db896cbd32-operator-scripts\") pod \"placement-991a-account-create-update-xvbkq\" (UID: \"c380979a-505d-4323-b423-54db896cbd32\") " pod="openstack/placement-991a-account-create-update-xvbkq" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.804932 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhqtc\" (UniqueName: \"kubernetes.io/projected/c380979a-505d-4323-b423-54db896cbd32-kube-api-access-bhqtc\") pod \"placement-991a-account-create-update-xvbkq\" (UID: \"c380979a-505d-4323-b423-54db896cbd32\") " pod="openstack/placement-991a-account-create-update-xvbkq" Mar 08 00:44:02 crc kubenswrapper[4762]: I0308 00:44:02.975536 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-991a-account-create-update-xvbkq" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.308954 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-jvbzs"] Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.310472 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-jvbzs" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.318941 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-jvbzs"] Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.392654 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-jvbzs\" (UID: \"bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a\") " pod="openstack/mysqld-exporter-openstack-db-create-jvbzs" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.392715 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcg44\" (UniqueName: \"kubernetes.io/projected/bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a-kube-api-access-tcg44\") pod \"mysqld-exporter-openstack-db-create-jvbzs\" (UID: \"bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a\") " pod="openstack/mysqld-exporter-openstack-db-create-jvbzs" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.435000 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-9d8d-account-create-update-q8xp9"] Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.436464 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-9d8d-account-create-update-q8xp9" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.439424 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.445631 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-9d8d-account-create-update-q8xp9"] Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.495454 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbqll\" (UniqueName: \"kubernetes.io/projected/2c9a41e2-e552-41bb-bf6f-393ab1186f83-kube-api-access-jbqll\") pod \"mysqld-exporter-9d8d-account-create-update-q8xp9\" (UID: \"2c9a41e2-e552-41bb-bf6f-393ab1186f83\") " pod="openstack/mysqld-exporter-9d8d-account-create-update-q8xp9" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.495559 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-jvbzs\" (UID: \"bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a\") " pod="openstack/mysqld-exporter-openstack-db-create-jvbzs" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.495590 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c9a41e2-e552-41bb-bf6f-393ab1186f83-operator-scripts\") pod \"mysqld-exporter-9d8d-account-create-update-q8xp9\" (UID: \"2c9a41e2-e552-41bb-bf6f-393ab1186f83\") " pod="openstack/mysqld-exporter-9d8d-account-create-update-q8xp9" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.496352 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-jvbzs\" (UID: \"bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a\") " pod="openstack/mysqld-exporter-openstack-db-create-jvbzs" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.499434 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcg44\" (UniqueName: \"kubernetes.io/projected/bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a-kube-api-access-tcg44\") pod \"mysqld-exporter-openstack-db-create-jvbzs\" (UID: \"bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a\") " pod="openstack/mysqld-exporter-openstack-db-create-jvbzs" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.523643 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcg44\" (UniqueName: \"kubernetes.io/projected/bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a-kube-api-access-tcg44\") pod \"mysqld-exporter-openstack-db-create-jvbzs\" (UID: \"bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a\") " pod="openstack/mysqld-exporter-openstack-db-create-jvbzs" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.601025 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbqll\" (UniqueName: \"kubernetes.io/projected/2c9a41e2-e552-41bb-bf6f-393ab1186f83-kube-api-access-jbqll\") pod \"mysqld-exporter-9d8d-account-create-update-q8xp9\" (UID: \"2c9a41e2-e552-41bb-bf6f-393ab1186f83\") " pod="openstack/mysqld-exporter-9d8d-account-create-update-q8xp9" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.601099 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c9a41e2-e552-41bb-bf6f-393ab1186f83-operator-scripts\") pod \"mysqld-exporter-9d8d-account-create-update-q8xp9\" (UID: \"2c9a41e2-e552-41bb-bf6f-393ab1186f83\") " pod="openstack/mysqld-exporter-9d8d-account-create-update-q8xp9" Mar 08 00:44:03 crc 
kubenswrapper[4762]: I0308 00:44:03.602069 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c9a41e2-e552-41bb-bf6f-393ab1186f83-operator-scripts\") pod \"mysqld-exporter-9d8d-account-create-update-q8xp9\" (UID: \"2c9a41e2-e552-41bb-bf6f-393ab1186f83\") " pod="openstack/mysqld-exporter-9d8d-account-create-update-q8xp9" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.621270 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbqll\" (UniqueName: \"kubernetes.io/projected/2c9a41e2-e552-41bb-bf6f-393ab1186f83-kube-api-access-jbqll\") pod \"mysqld-exporter-9d8d-account-create-update-q8xp9\" (UID: \"2c9a41e2-e552-41bb-bf6f-393ab1186f83\") " pod="openstack/mysqld-exporter-9d8d-account-create-update-q8xp9" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.680631 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-jvbzs" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.684931 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.744830 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-b9mmd"] Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.745413 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-b9mmd" podUID="57550501-ec35-4064-8e34-d470df9c2721" containerName="dnsmasq-dns" containerID="cri-o://a53cbb95f7bafa9c033a12779c97e00308b3a95122df1fcd45a1dc75552a93f9" gracePeriod=10 Mar 08 00:44:03 crc kubenswrapper[4762]: I0308 00:44:03.758631 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-9d8d-account-create-update-q8xp9" Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.313297 4762 generic.go:334] "Generic (PLEG): container finished" podID="57550501-ec35-4064-8e34-d470df9c2721" containerID="a53cbb95f7bafa9c033a12779c97e00308b3a95122df1fcd45a1dc75552a93f9" exitCode=0 Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.313339 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-b9mmd" event={"ID":"57550501-ec35-4064-8e34-d470df9c2721","Type":"ContainerDied","Data":"a53cbb95f7bafa9c033a12779c97e00308b3a95122df1fcd45a1dc75552a93f9"} Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.446526 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.523348 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-config\") pod \"57550501-ec35-4064-8e34-d470df9c2721\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.523389 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-ovsdbserver-sb\") pod \"57550501-ec35-4064-8e34-d470df9c2721\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.523421 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-ovsdbserver-nb\") pod \"57550501-ec35-4064-8e34-d470df9c2721\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.523477 4762 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-dns-svc\") pod \"57550501-ec35-4064-8e34-d470df9c2721\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.523500 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdvvl\" (UniqueName: \"kubernetes.io/projected/57550501-ec35-4064-8e34-d470df9c2721-kube-api-access-pdvvl\") pod \"57550501-ec35-4064-8e34-d470df9c2721\" (UID: \"57550501-ec35-4064-8e34-d470df9c2721\") " Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.528678 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57550501-ec35-4064-8e34-d470df9c2721-kube-api-access-pdvvl" (OuterVolumeSpecName: "kube-api-access-pdvvl") pod "57550501-ec35-4064-8e34-d470df9c2721" (UID: "57550501-ec35-4064-8e34-d470df9c2721"). InnerVolumeSpecName "kube-api-access-pdvvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.626791 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdvvl\" (UniqueName: \"kubernetes.io/projected/57550501-ec35-4064-8e34-d470df9c2721-kube-api-access-pdvvl\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.655523 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "57550501-ec35-4064-8e34-d470df9c2721" (UID: "57550501-ec35-4064-8e34-d470df9c2721"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.705384 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-65c5-account-create-update-b49fz"] Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.709449 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "57550501-ec35-4064-8e34-d470df9c2721" (UID: "57550501-ec35-4064-8e34-d470df9c2721"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.728479 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.728501 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.793639 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-config" (OuterVolumeSpecName: "config") pod "57550501-ec35-4064-8e34-d470df9c2721" (UID: "57550501-ec35-4064-8e34-d470df9c2721"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.794338 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "57550501-ec35-4064-8e34-d470df9c2721" (UID: "57550501-ec35-4064-8e34-d470df9c2721"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.830195 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-9d8d-account-create-update-q8xp9"] Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.832572 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.832586 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/57550501-ec35-4064-8e34-d470df9c2721-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.954571 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-a691-account-create-update-v7fhx"] Mar 08 00:44:04 crc kubenswrapper[4762]: I0308 00:44:04.988541 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548844-22sqx"] Mar 08 00:44:05 crc kubenswrapper[4762]: W0308 00:44:05.113932 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5aefe657_7268_4c06_be92_61d570355268.slice/crio-835f974239f197aa566c5ecf454a7e79fab7d0848ac3e73cda791c94507400d9 WatchSource:0}: Error finding container 835f974239f197aa566c5ecf454a7e79fab7d0848ac3e73cda791c94507400d9: Status 404 returned error can't find the container with id 835f974239f197aa566c5ecf454a7e79fab7d0848ac3e73cda791c94507400d9 Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.115130 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jq6nb"] Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.324932 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a691-account-create-update-v7fhx" 
event={"ID":"af6797d1-1e15-4695-8b5f-fc508fbc3dfb","Type":"ContainerStarted","Data":"25db8961e57edcd761d6dc322fc03d7a79964fa0852c152788c46b27ffd2d74d"} Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.326036 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548844-22sqx" event={"ID":"448af2de-e96f-4db2-a811-66c9805f7f34","Type":"ContainerStarted","Data":"d8af6c5e8d2f389de6a89cd5b189e3d37a49161ec5fe561adcea1fe3d757895a"} Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.328896 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"02437d1d-337c-4013-92e1-69125f57e03f","Type":"ContainerStarted","Data":"88d1a8325b11c7bbb84022f2d6dd6b89712b87aab6db08be2876c8bb2d89122f"} Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.330414 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-9d8d-account-create-update-q8xp9" event={"ID":"2c9a41e2-e552-41bb-bf6f-393ab1186f83","Type":"ContainerStarted","Data":"fab596abb13bba16c1cf418306574ccb58d95ed9942b5401c5ccbc9b8dc04d7d"} Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.332676 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-b9mmd" event={"ID":"57550501-ec35-4064-8e34-d470df9c2721","Type":"ContainerDied","Data":"bc4013e399e6acc7351cde5a46db028917ee92d07c595fa17d92325060c31a19"} Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.332726 4762 scope.go:117] "RemoveContainer" containerID="a53cbb95f7bafa9c033a12779c97e00308b3a95122df1fcd45a1dc75552a93f9" Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.332750 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-b9mmd" Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.334170 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"522084fd-c43a-45ad-a62a-a6a24d4e1a1b","Type":"ContainerStarted","Data":"b751052714d87715b7df1774026ec9f9202b85c1c38ee488b2f1f3c7db83a300"} Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.339896 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ch7rd" event={"ID":"6720c495-ef50-49f5-ae64-d3f0bcca1f68","Type":"ContainerStarted","Data":"8792c544325e308b756ad6cb52cc019c89a49cb96de49b874b5a9b8a43e95692"} Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.342234 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jq6nb" event={"ID":"5aefe657-7268-4c06-be92-61d570355268","Type":"ContainerStarted","Data":"835f974239f197aa566c5ecf454a7e79fab7d0848ac3e73cda791c94507400d9"} Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.343406 4762 generic.go:334] "Generic (PLEG): container finished" podID="e6374dea-1aa3-48b3-811e-31eb24e6c789" containerID="933f19f7cf71b4c43a1d088bec5acd131c9ff5551db9087ee577867812dba2a5" exitCode=0 Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.343433 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-65c5-account-create-update-b49fz" event={"ID":"e6374dea-1aa3-48b3-811e-31eb24e6c789","Type":"ContainerDied","Data":"933f19f7cf71b4c43a1d088bec5acd131c9ff5551db9087ee577867812dba2a5"} Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.343471 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-65c5-account-create-update-b49fz" event={"ID":"e6374dea-1aa3-48b3-811e-31eb24e6c789","Type":"ContainerStarted","Data":"80256087d158275bf1730f1f6cbaf15ab6ec85553f224ef9118c02cf85dab94a"} Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.358070 4762 scope.go:117] "RemoveContainer" 
containerID="cdbafd6205dbc13ecda8d78e7c47961fa770ff4ae6944c9eb67eeb9be9eb26db" Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.362751 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-b9mmd"] Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.390837 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-b9mmd"] Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.416947 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-ch7rd" podStartSLOduration=3.211902906 podStartE2EDuration="11.416926224s" podCreationTimestamp="2026-03-08 00:43:54 +0000 UTC" firstStartedPulling="2026-03-08 00:43:55.915874255 +0000 UTC m=+1257.390018629" lastFinishedPulling="2026-03-08 00:44:04.120897603 +0000 UTC m=+1265.595041947" observedRunningTime="2026-03-08 00:44:05.368968235 +0000 UTC m=+1266.843112579" watchObservedRunningTime="2026-03-08 00:44:05.416926224 +0000 UTC m=+1266.891070568" Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.448890 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5282s"] Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.471977 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-jvbzs"] Mar 08 00:44:05 crc kubenswrapper[4762]: W0308 00:44:05.490580 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b804311_8ca9_4928_b16c_626fb3fc2db1.slice/crio-d24ee3233e967095d1829e5b45af33c09973e8f3869ad1d6c4b6a339969132e3 WatchSource:0}: Error finding container d24ee3233e967095d1829e5b45af33c09973e8f3869ad1d6c4b6a339969132e3: Status 404 returned error can't find the container with id d24ee3233e967095d1829e5b45af33c09973e8f3869ad1d6c4b6a339969132e3 Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.498566 4762 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/placement-991a-account-create-update-xvbkq"] Mar 08 00:44:05 crc kubenswrapper[4762]: I0308 00:44:05.507505 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5tsjp"] Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.352351 4762 generic.go:334] "Generic (PLEG): container finished" podID="af6797d1-1e15-4695-8b5f-fc508fbc3dfb" containerID="398279af41c572272122adc5f50a56fe758dfc91325e94bb9c63cc3ceda81b6d" exitCode=0 Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.352515 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a691-account-create-update-v7fhx" event={"ID":"af6797d1-1e15-4695-8b5f-fc508fbc3dfb","Type":"ContainerDied","Data":"398279af41c572272122adc5f50a56fe758dfc91325e94bb9c63cc3ceda81b6d"} Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.353753 4762 generic.go:334] "Generic (PLEG): container finished" podID="5aefe657-7268-4c06-be92-61d570355268" containerID="1f715a019ce50952bb2791784585d9b0c888a620214525c6ba9e7ccb3f274f8c" exitCode=0 Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.353892 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jq6nb" event={"ID":"5aefe657-7268-4c06-be92-61d570355268","Type":"ContainerDied","Data":"1f715a019ce50952bb2791784585d9b0c888a620214525c6ba9e7ccb3f274f8c"} Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.356949 4762 generic.go:334] "Generic (PLEG): container finished" podID="c380979a-505d-4323-b423-54db896cbd32" containerID="75e99d917b83fb21cef835246d925843fcbed0e596de6a855c7ce486870551c6" exitCode=0 Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.357005 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-991a-account-create-update-xvbkq" event={"ID":"c380979a-505d-4323-b423-54db896cbd32","Type":"ContainerDied","Data":"75e99d917b83fb21cef835246d925843fcbed0e596de6a855c7ce486870551c6"} Mar 08 00:44:06 crc 
kubenswrapper[4762]: I0308 00:44:06.357026 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-991a-account-create-update-xvbkq" event={"ID":"c380979a-505d-4323-b423-54db896cbd32","Type":"ContainerStarted","Data":"6487c616b9c04e83cebf4d3d95d17158054e69baee2b761478742987983dd7e1"} Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.358127 4762 generic.go:334] "Generic (PLEG): container finished" podID="f4c47864-f560-4bbd-81f2-a7b25e917468" containerID="8a3b965d36a6e5c64f0afc3a25ca694352fc43ca9c7cdc35e2ee6399ce87326a" exitCode=0 Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.358175 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5282s" event={"ID":"f4c47864-f560-4bbd-81f2-a7b25e917468","Type":"ContainerDied","Data":"8a3b965d36a6e5c64f0afc3a25ca694352fc43ca9c7cdc35e2ee6399ce87326a"} Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.358195 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5282s" event={"ID":"f4c47864-f560-4bbd-81f2-a7b25e917468","Type":"ContainerStarted","Data":"9f587c5a946b2782ee304c5a4aadc0043f267f30f9e2fd48d3f9c5706446494e"} Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.360358 4762 generic.go:334] "Generic (PLEG): container finished" podID="2b804311-8ca9-4928-b16c-626fb3fc2db1" containerID="37b82cc5a20bab4bdd0b2cab6aeb36bfb6dcec76124ac527bb1ba08076462dec" exitCode=0 Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.360399 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5tsjp" event={"ID":"2b804311-8ca9-4928-b16c-626fb3fc2db1","Type":"ContainerDied","Data":"37b82cc5a20bab4bdd0b2cab6aeb36bfb6dcec76124ac527bb1ba08076462dec"} Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.360421 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5tsjp" 
event={"ID":"2b804311-8ca9-4928-b16c-626fb3fc2db1","Type":"ContainerStarted","Data":"d24ee3233e967095d1829e5b45af33c09973e8f3869ad1d6c4b6a339969132e3"} Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.362067 4762 generic.go:334] "Generic (PLEG): container finished" podID="bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a" containerID="c69a57ed62af5f482936ad5b8089f8cfbbab35864c53b4484ecb53831e3025df" exitCode=0 Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.362103 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-jvbzs" event={"ID":"bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a","Type":"ContainerDied","Data":"c69a57ed62af5f482936ad5b8089f8cfbbab35864c53b4484ecb53831e3025df"} Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.362117 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-jvbzs" event={"ID":"bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a","Type":"ContainerStarted","Data":"3245fb5bc759a33b355a4ad51293073c8de3afc3ca19826b746c7ec05c62dbe8"} Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.363709 4762 generic.go:334] "Generic (PLEG): container finished" podID="2c9a41e2-e552-41bb-bf6f-393ab1186f83" containerID="e6813afa2ee807b691ba3cec79736918a08d042c84b845867f9defdc0025ca78" exitCode=0 Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.363746 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-9d8d-account-create-update-q8xp9" event={"ID":"2c9a41e2-e552-41bb-bf6f-393ab1186f83","Type":"ContainerDied","Data":"e6813afa2ee807b691ba3cec79736918a08d042c84b845867f9defdc0025ca78"} Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.365063 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548844-22sqx" event={"ID":"448af2de-e96f-4db2-a811-66c9805f7f34","Type":"ContainerStarted","Data":"9158efa9b78f64d18ff46cf34e7e957a248e1599d62fb76715962dab0440e051"} Mar 08 00:44:06 crc 
kubenswrapper[4762]: I0308 00:44:06.371840 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"522084fd-c43a-45ad-a62a-a6a24d4e1a1b","Type":"ContainerStarted","Data":"9203bdb884f94f2c44c05794bad243f6d9b93b398bf8b4d939dec1bc80adb4c9"} Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.372405 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.392118 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548844-22sqx" podStartSLOduration=5.611192582 podStartE2EDuration="6.392092634s" podCreationTimestamp="2026-03-08 00:44:00 +0000 UTC" firstStartedPulling="2026-03-08 00:44:05.024192948 +0000 UTC m=+1266.498337292" lastFinishedPulling="2026-03-08 00:44:05.805093 +0000 UTC m=+1267.279237344" observedRunningTime="2026-03-08 00:44:06.387448503 +0000 UTC m=+1267.861592857" watchObservedRunningTime="2026-03-08 00:44:06.392092634 +0000 UTC m=+1267.866237008" Mar 08 00:44:06 crc kubenswrapper[4762]: I0308 00:44:06.511953 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.922401043 podStartE2EDuration="11.51193448s" podCreationTimestamp="2026-03-08 00:43:55 +0000 UTC" firstStartedPulling="2026-03-08 00:43:56.354799965 +0000 UTC m=+1257.828944349" lastFinishedPulling="2026-03-08 00:44:03.944333442 +0000 UTC m=+1265.418477786" observedRunningTime="2026-03-08 00:44:06.505471163 +0000 UTC m=+1267.979615507" watchObservedRunningTime="2026-03-08 00:44:06.51193448 +0000 UTC m=+1267.986078824" Mar 08 00:44:07 crc kubenswrapper[4762]: I0308 00:44:07.275028 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57550501-ec35-4064-8e34-d470df9c2721" path="/var/lib/kubelet/pods/57550501-ec35-4064-8e34-d470df9c2721/volumes" Mar 08 00:44:07 crc kubenswrapper[4762]: I0308 00:44:07.381316 4762 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-65c5-account-create-update-b49fz" event={"ID":"e6374dea-1aa3-48b3-811e-31eb24e6c789","Type":"ContainerDied","Data":"80256087d158275bf1730f1f6cbaf15ab6ec85553f224ef9118c02cf85dab94a"} Mar 08 00:44:07 crc kubenswrapper[4762]: I0308 00:44:07.381359 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80256087d158275bf1730f1f6cbaf15ab6ec85553f224ef9118c02cf85dab94a" Mar 08 00:44:07 crc kubenswrapper[4762]: I0308 00:44:07.382876 4762 generic.go:334] "Generic (PLEG): container finished" podID="448af2de-e96f-4db2-a811-66c9805f7f34" containerID="9158efa9b78f64d18ff46cf34e7e957a248e1599d62fb76715962dab0440e051" exitCode=0 Mar 08 00:44:07 crc kubenswrapper[4762]: I0308 00:44:07.383948 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548844-22sqx" event={"ID":"448af2de-e96f-4db2-a811-66c9805f7f34","Type":"ContainerDied","Data":"9158efa9b78f64d18ff46cf34e7e957a248e1599d62fb76715962dab0440e051"} Mar 08 00:44:07 crc kubenswrapper[4762]: I0308 00:44:07.530711 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-65c5-account-create-update-b49fz" Mar 08 00:44:07 crc kubenswrapper[4762]: I0308 00:44:07.716674 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6374dea-1aa3-48b3-811e-31eb24e6c789-operator-scripts\") pod \"e6374dea-1aa3-48b3-811e-31eb24e6c789\" (UID: \"e6374dea-1aa3-48b3-811e-31eb24e6c789\") " Mar 08 00:44:07 crc kubenswrapper[4762]: I0308 00:44:07.716820 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs25j\" (UniqueName: \"kubernetes.io/projected/e6374dea-1aa3-48b3-811e-31eb24e6c789-kube-api-access-fs25j\") pod \"e6374dea-1aa3-48b3-811e-31eb24e6c789\" (UID: \"e6374dea-1aa3-48b3-811e-31eb24e6c789\") " Mar 08 00:44:07 crc kubenswrapper[4762]: I0308 00:44:07.717390 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6374dea-1aa3-48b3-811e-31eb24e6c789-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6374dea-1aa3-48b3-811e-31eb24e6c789" (UID: "e6374dea-1aa3-48b3-811e-31eb24e6c789"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:07 crc kubenswrapper[4762]: I0308 00:44:07.725373 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6374dea-1aa3-48b3-811e-31eb24e6c789-kube-api-access-fs25j" (OuterVolumeSpecName: "kube-api-access-fs25j") pod "e6374dea-1aa3-48b3-811e-31eb24e6c789" (UID: "e6374dea-1aa3-48b3-811e-31eb24e6c789"). InnerVolumeSpecName "kube-api-access-fs25j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:07 crc kubenswrapper[4762]: I0308 00:44:07.819773 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs25j\" (UniqueName: \"kubernetes.io/projected/e6374dea-1aa3-48b3-811e-31eb24e6c789-kube-api-access-fs25j\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:07 crc kubenswrapper[4762]: I0308 00:44:07.819829 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6374dea-1aa3-48b3-811e-31eb24e6c789-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:07 crc kubenswrapper[4762]: I0308 00:44:07.939601 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jq6nb" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.083350 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-8bc97"] Mar 08 00:44:08 crc kubenswrapper[4762]: E0308 00:44:08.083823 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57550501-ec35-4064-8e34-d470df9c2721" containerName="dnsmasq-dns" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.083845 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="57550501-ec35-4064-8e34-d470df9c2721" containerName="dnsmasq-dns" Mar 08 00:44:08 crc kubenswrapper[4762]: E0308 00:44:08.083874 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57550501-ec35-4064-8e34-d470df9c2721" containerName="init" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.083883 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="57550501-ec35-4064-8e34-d470df9c2721" containerName="init" Mar 08 00:44:08 crc kubenswrapper[4762]: E0308 00:44:08.083896 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aefe657-7268-4c06-be92-61d570355268" containerName="mariadb-database-create" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.083904 
4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aefe657-7268-4c06-be92-61d570355268" containerName="mariadb-database-create" Mar 08 00:44:08 crc kubenswrapper[4762]: E0308 00:44:08.083914 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6374dea-1aa3-48b3-811e-31eb24e6c789" containerName="mariadb-account-create-update" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.083921 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6374dea-1aa3-48b3-811e-31eb24e6c789" containerName="mariadb-account-create-update" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.084155 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="57550501-ec35-4064-8e34-d470df9c2721" containerName="dnsmasq-dns" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.084201 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6374dea-1aa3-48b3-811e-31eb24e6c789" containerName="mariadb-account-create-update" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.084227 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aefe657-7268-4c06-be92-61d570355268" containerName="mariadb-database-create" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.085000 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8bc97" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.087221 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.125842 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8bc97"] Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.130695 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5aefe657-7268-4c06-be92-61d570355268-operator-scripts\") pod \"5aefe657-7268-4c06-be92-61d570355268\" (UID: \"5aefe657-7268-4c06-be92-61d570355268\") " Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.131019 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlkm7\" (UniqueName: \"kubernetes.io/projected/5aefe657-7268-4c06-be92-61d570355268-kube-api-access-dlkm7\") pod \"5aefe657-7268-4c06-be92-61d570355268\" (UID: \"5aefe657-7268-4c06-be92-61d570355268\") " Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.132263 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5aefe657-7268-4c06-be92-61d570355268-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5aefe657-7268-4c06-be92-61d570355268" (UID: "5aefe657-7268-4c06-be92-61d570355268"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.137976 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aefe657-7268-4c06-be92-61d570355268-kube-api-access-dlkm7" (OuterVolumeSpecName: "kube-api-access-dlkm7") pod "5aefe657-7268-4c06-be92-61d570355268" (UID: "5aefe657-7268-4c06-be92-61d570355268"). InnerVolumeSpecName "kube-api-access-dlkm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.240053 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srgkb\" (UniqueName: \"kubernetes.io/projected/dee864a4-cffe-402b-b563-ca6e8f6be174-kube-api-access-srgkb\") pod \"root-account-create-update-8bc97\" (UID: \"dee864a4-cffe-402b-b563-ca6e8f6be174\") " pod="openstack/root-account-create-update-8bc97" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.240110 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dee864a4-cffe-402b-b563-ca6e8f6be174-operator-scripts\") pod \"root-account-create-update-8bc97\" (UID: \"dee864a4-cffe-402b-b563-ca6e8f6be174\") " pod="openstack/root-account-create-update-8bc97" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.240173 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5aefe657-7268-4c06-be92-61d570355268-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.240216 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlkm7\" (UniqueName: \"kubernetes.io/projected/5aefe657-7268-4c06-be92-61d570355268-kube-api-access-dlkm7\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.241385 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5tsjp" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.267856 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-jvbzs" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.274724 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-9d8d-account-create-update-q8xp9" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.280867 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-991a-account-create-update-xvbkq" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.310760 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a691-account-create-update-v7fhx" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.336642 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5282s" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.340856 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b804311-8ca9-4928-b16c-626fb3fc2db1-operator-scripts\") pod \"2b804311-8ca9-4928-b16c-626fb3fc2db1\" (UID: \"2b804311-8ca9-4928-b16c-626fb3fc2db1\") " Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.340892 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq6tj\" (UniqueName: \"kubernetes.io/projected/2b804311-8ca9-4928-b16c-626fb3fc2db1-kube-api-access-qq6tj\") pod \"2b804311-8ca9-4928-b16c-626fb3fc2db1\" (UID: \"2b804311-8ca9-4928-b16c-626fb3fc2db1\") " Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.341342 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srgkb\" (UniqueName: \"kubernetes.io/projected/dee864a4-cffe-402b-b563-ca6e8f6be174-kube-api-access-srgkb\") pod \"root-account-create-update-8bc97\" (UID: \"dee864a4-cffe-402b-b563-ca6e8f6be174\") " pod="openstack/root-account-create-update-8bc97" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.341379 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/dee864a4-cffe-402b-b563-ca6e8f6be174-operator-scripts\") pod \"root-account-create-update-8bc97\" (UID: \"dee864a4-cffe-402b-b563-ca6e8f6be174\") " pod="openstack/root-account-create-update-8bc97" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.341530 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b804311-8ca9-4928-b16c-626fb3fc2db1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2b804311-8ca9-4928-b16c-626fb3fc2db1" (UID: "2b804311-8ca9-4928-b16c-626fb3fc2db1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.342063 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dee864a4-cffe-402b-b563-ca6e8f6be174-operator-scripts\") pod \"root-account-create-update-8bc97\" (UID: \"dee864a4-cffe-402b-b563-ca6e8f6be174\") " pod="openstack/root-account-create-update-8bc97" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.348967 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b804311-8ca9-4928-b16c-626fb3fc2db1-kube-api-access-qq6tj" (OuterVolumeSpecName: "kube-api-access-qq6tj") pod "2b804311-8ca9-4928-b16c-626fb3fc2db1" (UID: "2b804311-8ca9-4928-b16c-626fb3fc2db1"). InnerVolumeSpecName "kube-api-access-qq6tj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.403694 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srgkb\" (UniqueName: \"kubernetes.io/projected/dee864a4-cffe-402b-b563-ca6e8f6be174-kube-api-access-srgkb\") pod \"root-account-create-update-8bc97\" (UID: \"dee864a4-cffe-402b-b563-ca6e8f6be174\") " pod="openstack/root-account-create-update-8bc97" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.424519 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jq6nb" event={"ID":"5aefe657-7268-4c06-be92-61d570355268","Type":"ContainerDied","Data":"835f974239f197aa566c5ecf454a7e79fab7d0848ac3e73cda791c94507400d9"} Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.424705 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="835f974239f197aa566c5ecf454a7e79fab7d0848ac3e73cda791c94507400d9" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.424833 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jq6nb" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.439698 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"02437d1d-337c-4013-92e1-69125f57e03f","Type":"ContainerStarted","Data":"b9a950eff2784019b343bd8b9c24930cd52e8164dea92d72db1f2cefa3e85080"} Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.440710 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-991a-account-create-update-xvbkq" event={"ID":"c380979a-505d-4323-b423-54db896cbd32","Type":"ContainerDied","Data":"6487c616b9c04e83cebf4d3d95d17158054e69baee2b761478742987983dd7e1"} Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.440731 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6487c616b9c04e83cebf4d3d95d17158054e69baee2b761478742987983dd7e1" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.440791 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-991a-account-create-update-xvbkq" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.442490 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhqtc\" (UniqueName: \"kubernetes.io/projected/c380979a-505d-4323-b423-54db896cbd32-kube-api-access-bhqtc\") pod \"c380979a-505d-4323-b423-54db896cbd32\" (UID: \"c380979a-505d-4323-b423-54db896cbd32\") " Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.442549 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tk2p\" (UniqueName: \"kubernetes.io/projected/f4c47864-f560-4bbd-81f2-a7b25e917468-kube-api-access-8tk2p\") pod \"f4c47864-f560-4bbd-81f2-a7b25e917468\" (UID: \"f4c47864-f560-4bbd-81f2-a7b25e917468\") " Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.442661 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbqll\" (UniqueName: \"kubernetes.io/projected/2c9a41e2-e552-41bb-bf6f-393ab1186f83-kube-api-access-jbqll\") pod \"2c9a41e2-e552-41bb-bf6f-393ab1186f83\" (UID: \"2c9a41e2-e552-41bb-bf6f-393ab1186f83\") " Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.442691 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af6797d1-1e15-4695-8b5f-fc508fbc3dfb-operator-scripts\") pod \"af6797d1-1e15-4695-8b5f-fc508fbc3dfb\" (UID: \"af6797d1-1e15-4695-8b5f-fc508fbc3dfb\") " Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.442768 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c47864-f560-4bbd-81f2-a7b25e917468-operator-scripts\") pod \"f4c47864-f560-4bbd-81f2-a7b25e917468\" (UID: \"f4c47864-f560-4bbd-81f2-a7b25e917468\") " Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.442883 4762 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c380979a-505d-4323-b423-54db896cbd32-operator-scripts\") pod \"c380979a-505d-4323-b423-54db896cbd32\" (UID: \"c380979a-505d-4323-b423-54db896cbd32\") " Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.442909 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp4mc\" (UniqueName: \"kubernetes.io/projected/af6797d1-1e15-4695-8b5f-fc508fbc3dfb-kube-api-access-vp4mc\") pod \"af6797d1-1e15-4695-8b5f-fc508fbc3dfb\" (UID: \"af6797d1-1e15-4695-8b5f-fc508fbc3dfb\") " Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.442993 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcg44\" (UniqueName: \"kubernetes.io/projected/bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a-kube-api-access-tcg44\") pod \"bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a\" (UID: \"bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a\") " Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.443057 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a-operator-scripts\") pod \"bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a\" (UID: \"bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a\") " Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.443091 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c9a41e2-e552-41bb-bf6f-393ab1186f83-operator-scripts\") pod \"2c9a41e2-e552-41bb-bf6f-393ab1186f83\" (UID: \"2c9a41e2-e552-41bb-bf6f-393ab1186f83\") " Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.443677 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2b804311-8ca9-4928-b16c-626fb3fc2db1-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 
00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.443693 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq6tj\" (UniqueName: \"kubernetes.io/projected/2b804311-8ca9-4928-b16c-626fb3fc2db1-kube-api-access-qq6tj\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.447478 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a" (UID: "bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.459892 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af6797d1-1e15-4695-8b5f-fc508fbc3dfb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "af6797d1-1e15-4695-8b5f-fc508fbc3dfb" (UID: "af6797d1-1e15-4695-8b5f-fc508fbc3dfb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.459925 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c380979a-505d-4323-b423-54db896cbd32-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c380979a-505d-4323-b423-54db896cbd32" (UID: "c380979a-505d-4323-b423-54db896cbd32"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.460175 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-jvbzs" event={"ID":"bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a","Type":"ContainerDied","Data":"3245fb5bc759a33b355a4ad51293073c8de3afc3ca19826b746c7ec05c62dbe8"} Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.460206 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3245fb5bc759a33b355a4ad51293073c8de3afc3ca19826b746c7ec05c62dbe8" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.460271 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-jvbzs" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.460319 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4c47864-f560-4bbd-81f2-a7b25e917468-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4c47864-f560-4bbd-81f2-a7b25e917468" (UID: "f4c47864-f560-4bbd-81f2-a7b25e917468"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.463043 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-9d8d-account-create-update-q8xp9" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.463496 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-9d8d-account-create-update-q8xp9" event={"ID":"2c9a41e2-e552-41bb-bf6f-393ab1186f83","Type":"ContainerDied","Data":"fab596abb13bba16c1cf418306574ccb58d95ed9942b5401c5ccbc9b8dc04d7d"} Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.463525 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fab596abb13bba16c1cf418306574ccb58d95ed9942b5401c5ccbc9b8dc04d7d" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.463592 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4c47864-f560-4bbd-81f2-a7b25e917468-kube-api-access-8tk2p" (OuterVolumeSpecName: "kube-api-access-8tk2p") pod "f4c47864-f560-4bbd-81f2-a7b25e917468" (UID: "f4c47864-f560-4bbd-81f2-a7b25e917468"). InnerVolumeSpecName "kube-api-access-8tk2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.463971 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c9a41e2-e552-41bb-bf6f-393ab1186f83-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2c9a41e2-e552-41bb-bf6f-393ab1186f83" (UID: "2c9a41e2-e552-41bb-bf6f-393ab1186f83"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.464097 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c9a41e2-e552-41bb-bf6f-393ab1186f83-kube-api-access-jbqll" (OuterVolumeSpecName: "kube-api-access-jbqll") pod "2c9a41e2-e552-41bb-bf6f-393ab1186f83" (UID: "2c9a41e2-e552-41bb-bf6f-393ab1186f83"). InnerVolumeSpecName "kube-api-access-jbqll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.468167 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af6797d1-1e15-4695-8b5f-fc508fbc3dfb-kube-api-access-vp4mc" (OuterVolumeSpecName: "kube-api-access-vp4mc") pod "af6797d1-1e15-4695-8b5f-fc508fbc3dfb" (UID: "af6797d1-1e15-4695-8b5f-fc508fbc3dfb"). InnerVolumeSpecName "kube-api-access-vp4mc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.480039 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c380979a-505d-4323-b423-54db896cbd32-kube-api-access-bhqtc" (OuterVolumeSpecName: "kube-api-access-bhqtc") pod "c380979a-505d-4323-b423-54db896cbd32" (UID: "c380979a-505d-4323-b423-54db896cbd32"). InnerVolumeSpecName "kube-api-access-bhqtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.486519 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5282s" event={"ID":"f4c47864-f560-4bbd-81f2-a7b25e917468","Type":"ContainerDied","Data":"9f587c5a946b2782ee304c5a4aadc0043f267f30f9e2fd48d3f9c5706446494e"} Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.486561 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f587c5a946b2782ee304c5a4aadc0043f267f30f9e2fd48d3f9c5706446494e" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.486617 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-5282s" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.498986 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a-kube-api-access-tcg44" (OuterVolumeSpecName: "kube-api-access-tcg44") pod "bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a" (UID: "bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a"). InnerVolumeSpecName "kube-api-access-tcg44". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.519391 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5tsjp" event={"ID":"2b804311-8ca9-4928-b16c-626fb3fc2db1","Type":"ContainerDied","Data":"d24ee3233e967095d1829e5b45af33c09973e8f3869ad1d6c4b6a339969132e3"} Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.519429 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d24ee3233e967095d1829e5b45af33c09973e8f3869ad1d6c4b6a339969132e3" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.519492 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5tsjp" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.547411 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-65c5-account-create-update-b49fz" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.547725 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-a691-account-create-update-v7fhx" event={"ID":"af6797d1-1e15-4695-8b5f-fc508fbc3dfb","Type":"ContainerDied","Data":"25db8961e57edcd761d6dc322fc03d7a79964fa0852c152788c46b27ffd2d74d"} Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.547832 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25db8961e57edcd761d6dc322fc03d7a79964fa0852c152788c46b27ffd2d74d" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.547935 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-a691-account-create-update-v7fhx" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.548736 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.549159 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2c9a41e2-e552-41bb-bf6f-393ab1186f83-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.549653 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhqtc\" (UniqueName: \"kubernetes.io/projected/c380979a-505d-4323-b423-54db896cbd32-kube-api-access-bhqtc\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.549854 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tk2p\" (UniqueName: \"kubernetes.io/projected/f4c47864-f560-4bbd-81f2-a7b25e917468-kube-api-access-8tk2p\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.549870 
4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbqll\" (UniqueName: \"kubernetes.io/projected/2c9a41e2-e552-41bb-bf6f-393ab1186f83-kube-api-access-jbqll\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.549883 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/af6797d1-1e15-4695-8b5f-fc508fbc3dfb-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.549894 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c47864-f560-4bbd-81f2-a7b25e917468-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.549904 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c380979a-505d-4323-b423-54db896cbd32-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.549915 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp4mc\" (UniqueName: \"kubernetes.io/projected/af6797d1-1e15-4695-8b5f-fc508fbc3dfb-kube-api-access-vp4mc\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.549925 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcg44\" (UniqueName: \"kubernetes.io/projected/bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a-kube-api-access-tcg44\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.564237 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8bc97" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.851759 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548844-22sqx" Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.955948 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jw45b\" (UniqueName: \"kubernetes.io/projected/448af2de-e96f-4db2-a811-66c9805f7f34-kube-api-access-jw45b\") pod \"448af2de-e96f-4db2-a811-66c9805f7f34\" (UID: \"448af2de-e96f-4db2-a811-66c9805f7f34\") " Mar 08 00:44:08 crc kubenswrapper[4762]: I0308 00:44:08.960215 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/448af2de-e96f-4db2-a811-66c9805f7f34-kube-api-access-jw45b" (OuterVolumeSpecName: "kube-api-access-jw45b") pod "448af2de-e96f-4db2-a811-66c9805f7f34" (UID: "448af2de-e96f-4db2-a811-66c9805f7f34"). InnerVolumeSpecName "kube-api-access-jw45b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:09 crc kubenswrapper[4762]: I0308 00:44:09.058292 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jw45b\" (UniqueName: \"kubernetes.io/projected/448af2de-e96f-4db2-a811-66c9805f7f34-kube-api-access-jw45b\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:09 crc kubenswrapper[4762]: I0308 00:44:09.147076 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-8bc97"] Mar 08 00:44:09 crc kubenswrapper[4762]: W0308 00:44:09.156259 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddee864a4_cffe_402b_b563_ca6e8f6be174.slice/crio-49b1c88a9ca46c211ab7ea4059a4a0ea80318a4e4fa89b20fa92205ca92225bb WatchSource:0}: Error finding container 49b1c88a9ca46c211ab7ea4059a4a0ea80318a4e4fa89b20fa92205ca92225bb: Status 404 returned error can't find the container with id 49b1c88a9ca46c211ab7ea4059a4a0ea80318a4e4fa89b20fa92205ca92225bb Mar 08 00:44:09 crc kubenswrapper[4762]: I0308 00:44:09.471288 4762 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548838-b4q6k"] Mar 08 00:44:09 crc kubenswrapper[4762]: I0308 00:44:09.479197 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548838-b4q6k"] Mar 08 00:44:09 crc kubenswrapper[4762]: I0308 00:44:09.560006 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548844-22sqx" event={"ID":"448af2de-e96f-4db2-a811-66c9805f7f34","Type":"ContainerDied","Data":"d8af6c5e8d2f389de6a89cd5b189e3d37a49161ec5fe561adcea1fe3d757895a"} Mar 08 00:44:09 crc kubenswrapper[4762]: I0308 00:44:09.560044 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548844-22sqx" Mar 08 00:44:09 crc kubenswrapper[4762]: I0308 00:44:09.560052 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8af6c5e8d2f389de6a89cd5b189e3d37a49161ec5fe561adcea1fe3d757895a" Mar 08 00:44:09 crc kubenswrapper[4762]: I0308 00:44:09.565545 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8bc97" event={"ID":"dee864a4-cffe-402b-b563-ca6e8f6be174","Type":"ContainerStarted","Data":"f83b11436a82b08be21c4d8c66576da56f79d198bc5395ff870d1315def214c9"} Mar 08 00:44:09 crc kubenswrapper[4762]: I0308 00:44:09.565589 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8bc97" event={"ID":"dee864a4-cffe-402b-b563-ca6e8f6be174","Type":"ContainerStarted","Data":"49b1c88a9ca46c211ab7ea4059a4a0ea80318a4e4fa89b20fa92205ca92225bb"} Mar 08 00:44:09 crc kubenswrapper[4762]: I0308 00:44:09.591196 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-8bc97" podStartSLOduration=1.59117792 podStartE2EDuration="1.59117792s" podCreationTimestamp="2026-03-08 00:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:44:09.584201057 +0000 UTC m=+1271.058345401" watchObservedRunningTime="2026-03-08 00:44:09.59117792 +0000 UTC m=+1271.065322254" Mar 08 00:44:10 crc kubenswrapper[4762]: I0308 00:44:10.383201 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:44:10 crc kubenswrapper[4762]: E0308 00:44:10.384276 4762 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 08 00:44:10 crc kubenswrapper[4762]: E0308 00:44:10.384294 4762 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 08 00:44:10 crc kubenswrapper[4762]: E0308 00:44:10.384332 4762 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift podName:eb5158d2-f742-4eef-8c66-f2db685aeb9e nodeName:}" failed. No retries permitted until 2026-03-08 00:44:26.384317304 +0000 UTC m=+1287.858461648 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift") pod "swift-storage-0" (UID: "eb5158d2-f742-4eef-8c66-f2db685aeb9e") : configmap "swift-ring-files" not found Mar 08 00:44:10 crc kubenswrapper[4762]: I0308 00:44:10.577643 4762 generic.go:334] "Generic (PLEG): container finished" podID="dee864a4-cffe-402b-b563-ca6e8f6be174" containerID="f83b11436a82b08be21c4d8c66576da56f79d198bc5395ff870d1315def214c9" exitCode=0 Mar 08 00:44:10 crc kubenswrapper[4762]: I0308 00:44:10.577707 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8bc97" event={"ID":"dee864a4-cffe-402b-b563-ca6e8f6be174","Type":"ContainerDied","Data":"f83b11436a82b08be21c4d8c66576da56f79d198bc5395ff870d1315def214c9"} Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.282748 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28c72edf-8a55-4f3a-8c65-da6a0b0531c9" path="/var/lib/kubelet/pods/28c72edf-8a55-4f3a-8c65-da6a0b0531c9/volumes" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.590256 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"02437d1d-337c-4013-92e1-69125f57e03f","Type":"ContainerStarted","Data":"1daf484e97e7fea5718964a071bf8832a8d2e40ace98e8a916f1cac8251a99ce"} Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.622605 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=10.492292577 podStartE2EDuration="48.622586699s" podCreationTimestamp="2026-03-08 00:43:23 +0000 UTC" firstStartedPulling="2026-03-08 00:43:32.813330426 +0000 UTC m=+1234.287474770" lastFinishedPulling="2026-03-08 00:44:10.943624548 +0000 UTC m=+1272.417768892" observedRunningTime="2026-03-08 00:44:11.617732561 +0000 UTC m=+1273.091876985" watchObservedRunningTime="2026-03-08 00:44:11.622586699 +0000 UTC 
m=+1273.096731043" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.766068 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-5mfsp"] Mar 08 00:44:11 crc kubenswrapper[4762]: E0308 00:44:11.766500 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b804311-8ca9-4928-b16c-626fb3fc2db1" containerName="mariadb-database-create" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.766517 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b804311-8ca9-4928-b16c-626fb3fc2db1" containerName="mariadb-database-create" Mar 08 00:44:11 crc kubenswrapper[4762]: E0308 00:44:11.766526 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a" containerName="mariadb-database-create" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.766532 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a" containerName="mariadb-database-create" Mar 08 00:44:11 crc kubenswrapper[4762]: E0308 00:44:11.766541 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4c47864-f560-4bbd-81f2-a7b25e917468" containerName="mariadb-database-create" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.766548 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4c47864-f560-4bbd-81f2-a7b25e917468" containerName="mariadb-database-create" Mar 08 00:44:11 crc kubenswrapper[4762]: E0308 00:44:11.766586 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c9a41e2-e552-41bb-bf6f-393ab1186f83" containerName="mariadb-account-create-update" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.766592 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c9a41e2-e552-41bb-bf6f-393ab1186f83" containerName="mariadb-account-create-update" Mar 08 00:44:11 crc kubenswrapper[4762]: E0308 00:44:11.766602 4762 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="448af2de-e96f-4db2-a811-66c9805f7f34" containerName="oc" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.766609 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="448af2de-e96f-4db2-a811-66c9805f7f34" containerName="oc" Mar 08 00:44:11 crc kubenswrapper[4762]: E0308 00:44:11.766615 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af6797d1-1e15-4695-8b5f-fc508fbc3dfb" containerName="mariadb-account-create-update" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.766621 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="af6797d1-1e15-4695-8b5f-fc508fbc3dfb" containerName="mariadb-account-create-update" Mar 08 00:44:11 crc kubenswrapper[4762]: E0308 00:44:11.766636 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c380979a-505d-4323-b423-54db896cbd32" containerName="mariadb-account-create-update" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.766642 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="c380979a-505d-4323-b423-54db896cbd32" containerName="mariadb-account-create-update" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.766844 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c9a41e2-e552-41bb-bf6f-393ab1186f83" containerName="mariadb-account-create-update" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.766871 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a" containerName="mariadb-database-create" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.766884 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="af6797d1-1e15-4695-8b5f-fc508fbc3dfb" containerName="mariadb-account-create-update" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.766898 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="c380979a-505d-4323-b423-54db896cbd32" containerName="mariadb-account-create-update" Mar 08 00:44:11 crc kubenswrapper[4762]: 
I0308 00:44:11.766913 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b804311-8ca9-4928-b16c-626fb3fc2db1" containerName="mariadb-database-create" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.766924 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4c47864-f560-4bbd-81f2-a7b25e917468" containerName="mariadb-database-create" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.766949 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="448af2de-e96f-4db2-a811-66c9805f7f34" containerName="oc" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.767551 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5mfsp" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.770317 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.770543 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wrkbl" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.799403 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5mfsp"] Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.938096 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-config-data\") pod \"glance-db-sync-5mfsp\" (UID: \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\") " pod="openstack/glance-db-sync-5mfsp" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.938147 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-combined-ca-bundle\") pod \"glance-db-sync-5mfsp\" (UID: \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\") " 
pod="openstack/glance-db-sync-5mfsp" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.938170 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbtqd\" (UniqueName: \"kubernetes.io/projected/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-kube-api-access-bbtqd\") pod \"glance-db-sync-5mfsp\" (UID: \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\") " pod="openstack/glance-db-sync-5mfsp" Mar 08 00:44:11 crc kubenswrapper[4762]: I0308 00:44:11.938198 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-db-sync-config-data\") pod \"glance-db-sync-5mfsp\" (UID: \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\") " pod="openstack/glance-db-sync-5mfsp" Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.039837 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-config-data\") pod \"glance-db-sync-5mfsp\" (UID: \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\") " pod="openstack/glance-db-sync-5mfsp" Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.039897 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-combined-ca-bundle\") pod \"glance-db-sync-5mfsp\" (UID: \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\") " pod="openstack/glance-db-sync-5mfsp" Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.039922 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbtqd\" (UniqueName: \"kubernetes.io/projected/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-kube-api-access-bbtqd\") pod \"glance-db-sync-5mfsp\" (UID: \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\") " pod="openstack/glance-db-sync-5mfsp" Mar 08 00:44:12 crc 
kubenswrapper[4762]: I0308 00:44:12.039950 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-db-sync-config-data\") pod \"glance-db-sync-5mfsp\" (UID: \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\") " pod="openstack/glance-db-sync-5mfsp" Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.048315 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-db-sync-config-data\") pod \"glance-db-sync-5mfsp\" (UID: \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\") " pod="openstack/glance-db-sync-5mfsp" Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.049427 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-config-data\") pod \"glance-db-sync-5mfsp\" (UID: \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\") " pod="openstack/glance-db-sync-5mfsp" Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.050416 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-combined-ca-bundle\") pod \"glance-db-sync-5mfsp\" (UID: \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\") " pod="openstack/glance-db-sync-5mfsp" Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.063356 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbtqd\" (UniqueName: \"kubernetes.io/projected/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-kube-api-access-bbtqd\") pod \"glance-db-sync-5mfsp\" (UID: \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\") " pod="openstack/glance-db-sync-5mfsp" Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.092196 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-5mfsp" Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.174010 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-8bc97" Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.242449 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dee864a4-cffe-402b-b563-ca6e8f6be174-operator-scripts\") pod \"dee864a4-cffe-402b-b563-ca6e8f6be174\" (UID: \"dee864a4-cffe-402b-b563-ca6e8f6be174\") " Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.242567 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srgkb\" (UniqueName: \"kubernetes.io/projected/dee864a4-cffe-402b-b563-ca6e8f6be174-kube-api-access-srgkb\") pod \"dee864a4-cffe-402b-b563-ca6e8f6be174\" (UID: \"dee864a4-cffe-402b-b563-ca6e8f6be174\") " Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.244821 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dee864a4-cffe-402b-b563-ca6e8f6be174-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dee864a4-cffe-402b-b563-ca6e8f6be174" (UID: "dee864a4-cffe-402b-b563-ca6e8f6be174"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.259976 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dee864a4-cffe-402b-b563-ca6e8f6be174-kube-api-access-srgkb" (OuterVolumeSpecName: "kube-api-access-srgkb") pod "dee864a4-cffe-402b-b563-ca6e8f6be174" (UID: "dee864a4-cffe-402b-b563-ca6e8f6be174"). InnerVolumeSpecName "kube-api-access-srgkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.346130 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dee864a4-cffe-402b-b563-ca6e8f6be174-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.346162 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srgkb\" (UniqueName: \"kubernetes.io/projected/dee864a4-cffe-402b-b563-ca6e8f6be174-kube-api-access-srgkb\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.612257 4762 generic.go:334] "Generic (PLEG): container finished" podID="6720c495-ef50-49f5-ae64-d3f0bcca1f68" containerID="8792c544325e308b756ad6cb52cc019c89a49cb96de49b874b5a9b8a43e95692" exitCode=0 Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.612307 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ch7rd" event={"ID":"6720c495-ef50-49f5-ae64-d3f0bcca1f68","Type":"ContainerDied","Data":"8792c544325e308b756ad6cb52cc019c89a49cb96de49b874b5a9b8a43e95692"} Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.621289 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-8bc97" Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.621369 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-8bc97" event={"ID":"dee864a4-cffe-402b-b563-ca6e8f6be174","Type":"ContainerDied","Data":"49b1c88a9ca46c211ab7ea4059a4a0ea80318a4e4fa89b20fa92205ca92225bb"} Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.621396 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49b1c88a9ca46c211ab7ea4059a4a0ea80318a4e4fa89b20fa92205ca92225bb" Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.654568 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-55d845484c-b9ht8" podUID="5ec74e4c-c493-4d6e-a18d-16477eacccc4" containerName="console" containerID="cri-o://c5ef5660a44bab0a8bebbcb647db24b7f35ebbd447e0d420e9e5106142915760" gracePeriod=15 Mar 08 00:44:12 crc kubenswrapper[4762]: I0308 00:44:12.692153 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5mfsp"] Mar 08 00:44:12 crc kubenswrapper[4762]: W0308 00:44:12.742049 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podff70ef09_d101_4c3f_8a03_95b5fbe0b250.slice/crio-153ee8047bac8c4cb75319c652e9962e91b255f33746cea05442afa92e258dc3 WatchSource:0}: Error finding container 153ee8047bac8c4cb75319c652e9962e91b255f33746cea05442afa92e258dc3: Status 404 returned error can't find the container with id 153ee8047bac8c4cb75319c652e9962e91b255f33746cea05442afa92e258dc3 Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.190172 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-55d845484c-b9ht8_5ec74e4c-c493-4d6e-a18d-16477eacccc4/console/0.log" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.190232 4762 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-55d845484c-b9ht8" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.270836 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-service-ca\") pod \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.270910 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-serving-cert\") pod \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.271002 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-oauth-config\") pod \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.271074 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-oauth-serving-cert\") pod \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.271101 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-config\") pod \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.271193 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-8lcvm\" (UniqueName: \"kubernetes.io/projected/5ec74e4c-c493-4d6e-a18d-16477eacccc4-kube-api-access-8lcvm\") pod \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.271215 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-trusted-ca-bundle\") pod \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\" (UID: \"5ec74e4c-c493-4d6e-a18d-16477eacccc4\") " Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.271767 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-service-ca" (OuterVolumeSpecName: "service-ca") pod "5ec74e4c-c493-4d6e-a18d-16477eacccc4" (UID: "5ec74e4c-c493-4d6e-a18d-16477eacccc4"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.272282 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5ec74e4c-c493-4d6e-a18d-16477eacccc4" (UID: "5ec74e4c-c493-4d6e-a18d-16477eacccc4"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.272690 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-config" (OuterVolumeSpecName: "console-config") pod "5ec74e4c-c493-4d6e-a18d-16477eacccc4" (UID: "5ec74e4c-c493-4d6e-a18d-16477eacccc4"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.272999 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5ec74e4c-c493-4d6e-a18d-16477eacccc4" (UID: "5ec74e4c-c493-4d6e-a18d-16477eacccc4"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.276710 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5ec74e4c-c493-4d6e-a18d-16477eacccc4" (UID: "5ec74e4c-c493-4d6e-a18d-16477eacccc4"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.277586 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ec74e4c-c493-4d6e-a18d-16477eacccc4-kube-api-access-8lcvm" (OuterVolumeSpecName: "kube-api-access-8lcvm") pod "5ec74e4c-c493-4d6e-a18d-16477eacccc4" (UID: "5ec74e4c-c493-4d6e-a18d-16477eacccc4"). InnerVolumeSpecName "kube-api-access-8lcvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.303910 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5ec74e4c-c493-4d6e-a18d-16477eacccc4" (UID: "5ec74e4c-c493-4d6e-a18d-16477eacccc4"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.373626 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lcvm\" (UniqueName: \"kubernetes.io/projected/5ec74e4c-c493-4d6e-a18d-16477eacccc4-kube-api-access-8lcvm\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.373656 4762 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.373669 4762 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-service-ca\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.373678 4762 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.373686 4762 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-oauth-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.373695 4762 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.373703 4762 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5ec74e4c-c493-4d6e-a18d-16477eacccc4-console-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:13 crc 
kubenswrapper[4762]: I0308 00:44:13.619228 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w"] Mar 08 00:44:13 crc kubenswrapper[4762]: E0308 00:44:13.619836 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ec74e4c-c493-4d6e-a18d-16477eacccc4" containerName="console" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.619852 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ec74e4c-c493-4d6e-a18d-16477eacccc4" containerName="console" Mar 08 00:44:13 crc kubenswrapper[4762]: E0308 00:44:13.619867 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dee864a4-cffe-402b-b563-ca6e8f6be174" containerName="mariadb-account-create-update" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.619875 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="dee864a4-cffe-402b-b563-ca6e8f6be174" containerName="mariadb-account-create-update" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.620063 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="dee864a4-cffe-402b-b563-ca6e8f6be174" containerName="mariadb-account-create-update" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.620081 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ec74e4c-c493-4d6e-a18d-16477eacccc4" containerName="console" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.620667 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.630564 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w"] Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.631289 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5mfsp" event={"ID":"ff70ef09-d101-4c3f-8a03-95b5fbe0b250","Type":"ContainerStarted","Data":"153ee8047bac8c4cb75319c652e9962e91b255f33746cea05442afa92e258dc3"} Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.636983 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-55d845484c-b9ht8_5ec74e4c-c493-4d6e-a18d-16477eacccc4/console/0.log" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.637039 4762 generic.go:334] "Generic (PLEG): container finished" podID="5ec74e4c-c493-4d6e-a18d-16477eacccc4" containerID="c5ef5660a44bab0a8bebbcb647db24b7f35ebbd447e0d420e9e5106142915760" exitCode=2 Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.637111 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-55d845484c-b9ht8" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.637171 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55d845484c-b9ht8" event={"ID":"5ec74e4c-c493-4d6e-a18d-16477eacccc4","Type":"ContainerDied","Data":"c5ef5660a44bab0a8bebbcb647db24b7f35ebbd447e0d420e9e5106142915760"} Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.637219 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-55d845484c-b9ht8" event={"ID":"5ec74e4c-c493-4d6e-a18d-16477eacccc4","Type":"ContainerDied","Data":"220dd47c8cd2347d9b196177cc7f6ae0e43d52a0fecba739f1a25acd33a03f4b"} Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.637237 4762 scope.go:117] "RemoveContainer" containerID="c5ef5660a44bab0a8bebbcb647db24b7f35ebbd447e0d420e9e5106142915760" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.669918 4762 scope.go:117] "RemoveContainer" containerID="c5ef5660a44bab0a8bebbcb647db24b7f35ebbd447e0d420e9e5106142915760" Mar 08 00:44:13 crc kubenswrapper[4762]: E0308 00:44:13.675062 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5ef5660a44bab0a8bebbcb647db24b7f35ebbd447e0d420e9e5106142915760\": container with ID starting with c5ef5660a44bab0a8bebbcb647db24b7f35ebbd447e0d420e9e5106142915760 not found: ID does not exist" containerID="c5ef5660a44bab0a8bebbcb647db24b7f35ebbd447e0d420e9e5106142915760" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.675136 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5ef5660a44bab0a8bebbcb647db24b7f35ebbd447e0d420e9e5106142915760"} err="failed to get container status \"c5ef5660a44bab0a8bebbcb647db24b7f35ebbd447e0d420e9e5106142915760\": rpc error: code = NotFound desc = could not find container \"c5ef5660a44bab0a8bebbcb647db24b7f35ebbd447e0d420e9e5106142915760\": 
container with ID starting with c5ef5660a44bab0a8bebbcb647db24b7f35ebbd447e0d420e9e5106142915760 not found: ID does not exist" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.684901 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-55d845484c-b9ht8"] Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.698138 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-55d845484c-b9ht8"] Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.781143 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4wc9\" (UniqueName: \"kubernetes.io/projected/520a38bb-14f1-4be1-a039-520016f372e0-kube-api-access-g4wc9\") pod \"mysqld-exporter-openstack-cell1-db-create-zlj5w\" (UID: \"520a38bb-14f1-4be1-a039-520016f372e0\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.781346 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/520a38bb-14f1-4be1-a039-520016f372e0-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-zlj5w\" (UID: \"520a38bb-14f1-4be1-a039-520016f372e0\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.837904 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-4077-account-create-update-xm99q"] Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.839179 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-4077-account-create-update-xm99q" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.843081 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.843190 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-4077-account-create-update-xm99q"] Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.882959 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/520a38bb-14f1-4be1-a039-520016f372e0-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-zlj5w\" (UID: \"520a38bb-14f1-4be1-a039-520016f372e0\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.883031 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4wc9\" (UniqueName: \"kubernetes.io/projected/520a38bb-14f1-4be1-a039-520016f372e0-kube-api-access-g4wc9\") pod \"mysqld-exporter-openstack-cell1-db-create-zlj5w\" (UID: \"520a38bb-14f1-4be1-a039-520016f372e0\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.883687 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/520a38bb-14f1-4be1-a039-520016f372e0-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-zlj5w\" (UID: \"520a38bb-14f1-4be1-a039-520016f372e0\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.901388 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4wc9\" (UniqueName: 
\"kubernetes.io/projected/520a38bb-14f1-4be1-a039-520016f372e0-kube-api-access-g4wc9\") pod \"mysqld-exporter-openstack-cell1-db-create-zlj5w\" (UID: \"520a38bb-14f1-4be1-a039-520016f372e0\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.923644 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.950123 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.985544 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj66m\" (UniqueName: \"kubernetes.io/projected/eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8-kube-api-access-wj66m\") pod \"mysqld-exporter-4077-account-create-update-xm99q\" (UID: \"eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8\") " pod="openstack/mysqld-exporter-4077-account-create-update-xm99q" Mar 08 00:44:13 crc kubenswrapper[4762]: I0308 00:44:13.985623 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8-operator-scripts\") pod \"mysqld-exporter-4077-account-create-update-xm99q\" (UID: \"eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8\") " pod="openstack/mysqld-exporter-4077-account-create-update-xm99q" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.086810 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6720c495-ef50-49f5-ae64-d3f0bcca1f68-ring-data-devices\") pod \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.086893 4762 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24n9z\" (UniqueName: \"kubernetes.io/projected/6720c495-ef50-49f5-ae64-d3f0bcca1f68-kube-api-access-24n9z\") pod \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.086975 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6720c495-ef50-49f5-ae64-d3f0bcca1f68-scripts\") pod \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.087035 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6720c495-ef50-49f5-ae64-d3f0bcca1f68-etc-swift\") pod \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.087082 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-swiftconf\") pod \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.087112 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-combined-ca-bundle\") pod \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\" (UID: \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.087136 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-dispersionconf\") pod \"6720c495-ef50-49f5-ae64-d3f0bcca1f68\" (UID: 
\"6720c495-ef50-49f5-ae64-d3f0bcca1f68\") " Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.087404 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj66m\" (UniqueName: \"kubernetes.io/projected/eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8-kube-api-access-wj66m\") pod \"mysqld-exporter-4077-account-create-update-xm99q\" (UID: \"eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8\") " pod="openstack/mysqld-exporter-4077-account-create-update-xm99q" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.087461 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8-operator-scripts\") pod \"mysqld-exporter-4077-account-create-update-xm99q\" (UID: \"eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8\") " pod="openstack/mysqld-exporter-4077-account-create-update-xm99q" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.087563 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6720c495-ef50-49f5-ae64-d3f0bcca1f68-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6720c495-ef50-49f5-ae64-d3f0bcca1f68" (UID: "6720c495-ef50-49f5-ae64-d3f0bcca1f68"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.089347 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8-operator-scripts\") pod \"mysqld-exporter-4077-account-create-update-xm99q\" (UID: \"eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8\") " pod="openstack/mysqld-exporter-4077-account-create-update-xm99q" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.090836 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6720c495-ef50-49f5-ae64-d3f0bcca1f68-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6720c495-ef50-49f5-ae64-d3f0bcca1f68" (UID: "6720c495-ef50-49f5-ae64-d3f0bcca1f68"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.103091 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6720c495-ef50-49f5-ae64-d3f0bcca1f68" (UID: "6720c495-ef50-49f5-ae64-d3f0bcca1f68"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.108759 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6720c495-ef50-49f5-ae64-d3f0bcca1f68-kube-api-access-24n9z" (OuterVolumeSpecName: "kube-api-access-24n9z") pod "6720c495-ef50-49f5-ae64-d3f0bcca1f68" (UID: "6720c495-ef50-49f5-ae64-d3f0bcca1f68"). InnerVolumeSpecName "kube-api-access-24n9z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.109153 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj66m\" (UniqueName: \"kubernetes.io/projected/eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8-kube-api-access-wj66m\") pod \"mysqld-exporter-4077-account-create-update-xm99q\" (UID: \"eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8\") " pod="openstack/mysqld-exporter-4077-account-create-update-xm99q" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.119805 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6720c495-ef50-49f5-ae64-d3f0bcca1f68" (UID: "6720c495-ef50-49f5-ae64-d3f0bcca1f68"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.124696 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6720c495-ef50-49f5-ae64-d3f0bcca1f68" (UID: "6720c495-ef50-49f5-ae64-d3f0bcca1f68"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.127558 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6720c495-ef50-49f5-ae64-d3f0bcca1f68-scripts" (OuterVolumeSpecName: "scripts") pod "6720c495-ef50-49f5-ae64-d3f0bcca1f68" (UID: "6720c495-ef50-49f5-ae64-d3f0bcca1f68"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.159185 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-4077-account-create-update-xm99q" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.189410 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.189444 4762 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.189453 4762 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6720c495-ef50-49f5-ae64-d3f0bcca1f68-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.189463 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24n9z\" (UniqueName: \"kubernetes.io/projected/6720c495-ef50-49f5-ae64-d3f0bcca1f68-kube-api-access-24n9z\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.189474 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6720c495-ef50-49f5-ae64-d3f0bcca1f68-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.189482 4762 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6720c495-ef50-49f5-ae64-d3f0bcca1f68-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.189491 4762 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6720c495-ef50-49f5-ae64-d3f0bcca1f68-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 
00:44:14.472782 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w"] Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.589093 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-8bc97"] Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.599939 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-8bc97"] Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.647466 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ch7rd" event={"ID":"6720c495-ef50-49f5-ae64-d3f0bcca1f68","Type":"ContainerDied","Data":"32d014d61b1db483c42aceef098ed005b8d2d24aa3efc54809ed77d10ef63184"} Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.647495 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32d014d61b1db483c42aceef098ed005b8d2d24aa3efc54809ed77d10ef63184" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.647556 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ch7rd" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.676433 4762 generic.go:334] "Generic (PLEG): container finished" podID="a759d745-52d2-48f8-9848-172ace2b5120" containerID="7bdb2ea1f65eb7942f2f7e3865b6d5415488a631bf1ade298c09336b2f2e6d96" exitCode=0 Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.676487 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a759d745-52d2-48f8-9848-172ace2b5120","Type":"ContainerDied","Data":"7bdb2ea1f65eb7942f2f7e3865b6d5415488a631bf1ade298c09336b2f2e6d96"} Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.684046 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w" event={"ID":"520a38bb-14f1-4be1-a039-520016f372e0","Type":"ContainerStarted","Data":"1ef85f81ac3661518882b18eb76ac65f6a36cb49dc2f37fce44713f7db2f67af"} Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.684078 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w" event={"ID":"520a38bb-14f1-4be1-a039-520016f372e0","Type":"ContainerStarted","Data":"b9bcd9ada8359cbd103ecc1d294fc0192dbc3f2d6596891258a46d3c2e1b7d97"} Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.726206 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w" podStartSLOduration=1.72618954 podStartE2EDuration="1.72618954s" podCreationTimestamp="2026-03-08 00:44:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:44:14.718226277 +0000 UTC m=+1276.192370621" watchObservedRunningTime="2026-03-08 00:44:14.72618954 +0000 UTC m=+1276.200333884" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.731826 4762 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 08 00:44:14 crc kubenswrapper[4762]: I0308 00:44:14.833136 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-4077-account-create-update-xm99q"] Mar 08 00:44:15 crc kubenswrapper[4762]: I0308 00:44:15.279834 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ec74e4c-c493-4d6e-a18d-16477eacccc4" path="/var/lib/kubelet/pods/5ec74e4c-c493-4d6e-a18d-16477eacccc4/volumes" Mar 08 00:44:15 crc kubenswrapper[4762]: I0308 00:44:15.281708 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dee864a4-cffe-402b-b563-ca6e8f6be174" path="/var/lib/kubelet/pods/dee864a4-cffe-402b-b563-ca6e8f6be174/volumes" Mar 08 00:44:15 crc kubenswrapper[4762]: I0308 00:44:15.697877 4762 generic.go:334] "Generic (PLEG): container finished" podID="520a38bb-14f1-4be1-a039-520016f372e0" containerID="1ef85f81ac3661518882b18eb76ac65f6a36cb49dc2f37fce44713f7db2f67af" exitCode=0 Mar 08 00:44:15 crc kubenswrapper[4762]: I0308 00:44:15.697957 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w" event={"ID":"520a38bb-14f1-4be1-a039-520016f372e0","Type":"ContainerDied","Data":"1ef85f81ac3661518882b18eb76ac65f6a36cb49dc2f37fce44713f7db2f67af"} Mar 08 00:44:15 crc kubenswrapper[4762]: I0308 00:44:15.702732 4762 generic.go:334] "Generic (PLEG): container finished" podID="543cbbde-da2d-43c4-87f9-85f8e4e90101" containerID="da1f3f1b9b29fd8f8086fd577e095c3fa0723111de086ccb0c46eea6316e3241" exitCode=0 Mar 08 00:44:15 crc kubenswrapper[4762]: I0308 00:44:15.702791 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"543cbbde-da2d-43c4-87f9-85f8e4e90101","Type":"ContainerDied","Data":"da1f3f1b9b29fd8f8086fd577e095c3fa0723111de086ccb0c46eea6316e3241"} Mar 08 00:44:15 crc kubenswrapper[4762]: I0308 00:44:15.705327 4762 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a759d745-52d2-48f8-9848-172ace2b5120","Type":"ContainerStarted","Data":"32123171473b9e7900d27f96249cf6b9cd735efec4b5f24853235b76f21252f0"} Mar 08 00:44:15 crc kubenswrapper[4762]: I0308 00:44:15.705748 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 08 00:44:15 crc kubenswrapper[4762]: I0308 00:44:15.711448 4762 generic.go:334] "Generic (PLEG): container finished" podID="eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8" containerID="9c5b5bc6140bec8b935b6724af712306b9e97c1dcd59e4594f4fcc91bbfd18dd" exitCode=0 Mar 08 00:44:15 crc kubenswrapper[4762]: I0308 00:44:15.711506 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4077-account-create-update-xm99q" event={"ID":"eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8","Type":"ContainerDied","Data":"9c5b5bc6140bec8b935b6724af712306b9e97c1dcd59e4594f4fcc91bbfd18dd"} Mar 08 00:44:15 crc kubenswrapper[4762]: I0308 00:44:15.711539 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4077-account-create-update-xm99q" event={"ID":"eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8","Type":"ContainerStarted","Data":"c0277846d354527d920e816fdf8ba7a355044dae57ff0ae0d90e3243f4d15b63"} Mar 08 00:44:15 crc kubenswrapper[4762]: I0308 00:44:15.748173 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=50.780774538 podStartE2EDuration="58.748157204s" podCreationTimestamp="2026-03-08 00:43:17 +0000 UTC" firstStartedPulling="2026-03-08 00:43:30.475564383 +0000 UTC m=+1231.949708727" lastFinishedPulling="2026-03-08 00:43:38.442947039 +0000 UTC m=+1239.917091393" observedRunningTime="2026-03-08 00:44:15.747657849 +0000 UTC m=+1277.221802193" watchObservedRunningTime="2026-03-08 00:44:15.748157204 +0000 UTC m=+1277.222301548" Mar 08 00:44:15 crc kubenswrapper[4762]: I0308 00:44:15.854194 4762 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.300528 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-4077-account-create-update-xm99q" Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.361499 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8-operator-scripts\") pod \"eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8\" (UID: \"eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8\") " Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.361713 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj66m\" (UniqueName: \"kubernetes.io/projected/eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8-kube-api-access-wj66m\") pod \"eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8\" (UID: \"eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8\") " Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.363225 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8" (UID: "eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.368580 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8-kube-api-access-wj66m" (OuterVolumeSpecName: "kube-api-access-wj66m") pod "eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8" (UID: "eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8"). InnerVolumeSpecName "kube-api-access-wj66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.438903 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w" Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.463611 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj66m\" (UniqueName: \"kubernetes.io/projected/eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8-kube-api-access-wj66m\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.463646 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.564742 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/520a38bb-14f1-4be1-a039-520016f372e0-operator-scripts\") pod \"520a38bb-14f1-4be1-a039-520016f372e0\" (UID: \"520a38bb-14f1-4be1-a039-520016f372e0\") " Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.564797 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4wc9\" (UniqueName: \"kubernetes.io/projected/520a38bb-14f1-4be1-a039-520016f372e0-kube-api-access-g4wc9\") pod \"520a38bb-14f1-4be1-a039-520016f372e0\" (UID: \"520a38bb-14f1-4be1-a039-520016f372e0\") " Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.565407 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/520a38bb-14f1-4be1-a039-520016f372e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "520a38bb-14f1-4be1-a039-520016f372e0" (UID: "520a38bb-14f1-4be1-a039-520016f372e0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.568064 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/520a38bb-14f1-4be1-a039-520016f372e0-kube-api-access-g4wc9" (OuterVolumeSpecName: "kube-api-access-g4wc9") pod "520a38bb-14f1-4be1-a039-520016f372e0" (UID: "520a38bb-14f1-4be1-a039-520016f372e0"). InnerVolumeSpecName "kube-api-access-g4wc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.667312 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/520a38bb-14f1-4be1-a039-520016f372e0-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.667339 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4wc9\" (UniqueName: \"kubernetes.io/projected/520a38bb-14f1-4be1-a039-520016f372e0-kube-api-access-g4wc9\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.731152 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-4077-account-create-update-xm99q" event={"ID":"eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8","Type":"ContainerDied","Data":"c0277846d354527d920e816fdf8ba7a355044dae57ff0ae0d90e3243f4d15b63"} Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.731190 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0277846d354527d920e816fdf8ba7a355044dae57ff0ae0d90e3243f4d15b63" Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.731242 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-4077-account-create-update-xm99q" Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.740322 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w" event={"ID":"520a38bb-14f1-4be1-a039-520016f372e0","Type":"ContainerDied","Data":"b9bcd9ada8359cbd103ecc1d294fc0192dbc3f2d6596891258a46d3c2e1b7d97"} Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.740381 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9bcd9ada8359cbd103ecc1d294fc0192dbc3f2d6596891258a46d3c2e1b7d97" Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.740465 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w" Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.749214 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"543cbbde-da2d-43c4-87f9-85f8e4e90101","Type":"ContainerStarted","Data":"3cad6bed315c16867c9504799cfea0ea54b8be9f89ee1a5dc667c079b4f2b098"} Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.749466 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:44:17 crc kubenswrapper[4762]: I0308 00:44:17.782293 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=57.10934936 podStartE2EDuration="1m1.782278435s" podCreationTimestamp="2026-03-08 00:43:16 +0000 UTC" firstStartedPulling="2026-03-08 00:43:32.465104133 +0000 UTC m=+1233.939248467" lastFinishedPulling="2026-03-08 00:43:37.138033198 +0000 UTC m=+1238.612177542" observedRunningTime="2026-03-08 00:44:17.775531039 +0000 UTC m=+1279.249675383" watchObservedRunningTime="2026-03-08 00:44:17.782278435 +0000 UTC m=+1279.256422779" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 
00:44:18.179536 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zwrf4"] Mar 08 00:44:18 crc kubenswrapper[4762]: E0308 00:44:18.179949 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="520a38bb-14f1-4be1-a039-520016f372e0" containerName="mariadb-database-create" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.179966 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="520a38bb-14f1-4be1-a039-520016f372e0" containerName="mariadb-database-create" Mar 08 00:44:18 crc kubenswrapper[4762]: E0308 00:44:18.179982 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6720c495-ef50-49f5-ae64-d3f0bcca1f68" containerName="swift-ring-rebalance" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.179989 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="6720c495-ef50-49f5-ae64-d3f0bcca1f68" containerName="swift-ring-rebalance" Mar 08 00:44:18 crc kubenswrapper[4762]: E0308 00:44:18.180001 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8" containerName="mariadb-account-create-update" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.180007 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8" containerName="mariadb-account-create-update" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.180164 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="520a38bb-14f1-4be1-a039-520016f372e0" containerName="mariadb-database-create" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.180183 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8" containerName="mariadb-account-create-update" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.180200 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="6720c495-ef50-49f5-ae64-d3f0bcca1f68" containerName="swift-ring-rebalance" 
Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.180744 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zwrf4" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.182987 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.208215 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zwrf4"] Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.293749 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkvfr\" (UniqueName: \"kubernetes.io/projected/19c13643-eddc-4b3d-a256-09bb2d876192-kube-api-access-mkvfr\") pod \"root-account-create-update-zwrf4\" (UID: \"19c13643-eddc-4b3d-a256-09bb2d876192\") " pod="openstack/root-account-create-update-zwrf4" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.293924 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19c13643-eddc-4b3d-a256-09bb2d876192-operator-scripts\") pod \"root-account-create-update-zwrf4\" (UID: \"19c13643-eddc-4b3d-a256-09bb2d876192\") " pod="openstack/root-account-create-update-zwrf4" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.396808 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkvfr\" (UniqueName: \"kubernetes.io/projected/19c13643-eddc-4b3d-a256-09bb2d876192-kube-api-access-mkvfr\") pod \"root-account-create-update-zwrf4\" (UID: \"19c13643-eddc-4b3d-a256-09bb2d876192\") " pod="openstack/root-account-create-update-zwrf4" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.397338 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/19c13643-eddc-4b3d-a256-09bb2d876192-operator-scripts\") pod \"root-account-create-update-zwrf4\" (UID: \"19c13643-eddc-4b3d-a256-09bb2d876192\") " pod="openstack/root-account-create-update-zwrf4" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.398288 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19c13643-eddc-4b3d-a256-09bb2d876192-operator-scripts\") pod \"root-account-create-update-zwrf4\" (UID: \"19c13643-eddc-4b3d-a256-09bb2d876192\") " pod="openstack/root-account-create-update-zwrf4" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.417027 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkvfr\" (UniqueName: \"kubernetes.io/projected/19c13643-eddc-4b3d-a256-09bb2d876192-kube-api-access-mkvfr\") pod \"root-account-create-update-zwrf4\" (UID: \"19c13643-eddc-4b3d-a256-09bb2d876192\") " pod="openstack/root-account-create-update-zwrf4" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.496644 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zwrf4" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.974112 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.975207 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.977225 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Mar 08 00:44:18 crc kubenswrapper[4762]: I0308 00:44:18.990738 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zwrf4"] Mar 08 00:44:19 crc kubenswrapper[4762]: I0308 00:44:19.009010 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-config-data\") pod \"mysqld-exporter-0\" (UID: \"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc\") " pod="openstack/mysqld-exporter-0" Mar 08 00:44:19 crc kubenswrapper[4762]: I0308 00:44:19.009062 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqskl\" (UniqueName: \"kubernetes.io/projected/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-kube-api-access-kqskl\") pod \"mysqld-exporter-0\" (UID: \"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc\") " pod="openstack/mysqld-exporter-0" Mar 08 00:44:19 crc kubenswrapper[4762]: I0308 00:44:19.009108 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc\") " pod="openstack/mysqld-exporter-0" Mar 08 00:44:19 crc kubenswrapper[4762]: I0308 00:44:19.037524 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 08 00:44:19 crc kubenswrapper[4762]: I0308 00:44:19.110552 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-config-data\") pod \"mysqld-exporter-0\" (UID: 
\"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc\") " pod="openstack/mysqld-exporter-0" Mar 08 00:44:19 crc kubenswrapper[4762]: I0308 00:44:19.110608 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqskl\" (UniqueName: \"kubernetes.io/projected/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-kube-api-access-kqskl\") pod \"mysqld-exporter-0\" (UID: \"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc\") " pod="openstack/mysqld-exporter-0" Mar 08 00:44:19 crc kubenswrapper[4762]: I0308 00:44:19.110847 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc\") " pod="openstack/mysqld-exporter-0" Mar 08 00:44:19 crc kubenswrapper[4762]: I0308 00:44:19.117444 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc\") " pod="openstack/mysqld-exporter-0" Mar 08 00:44:19 crc kubenswrapper[4762]: I0308 00:44:19.117545 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-config-data\") pod \"mysqld-exporter-0\" (UID: \"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc\") " pod="openstack/mysqld-exporter-0" Mar 08 00:44:19 crc kubenswrapper[4762]: I0308 00:44:19.126679 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqskl\" (UniqueName: \"kubernetes.io/projected/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-kube-api-access-kqskl\") pod \"mysqld-exporter-0\" (UID: \"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc\") " pod="openstack/mysqld-exporter-0" Mar 08 00:44:19 crc kubenswrapper[4762]: I0308 00:44:19.292684 4762 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 08 00:44:19 crc kubenswrapper[4762]: I0308 00:44:19.751152 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 08 00:44:19 crc kubenswrapper[4762]: I0308 00:44:19.787461 4762 generic.go:334] "Generic (PLEG): container finished" podID="19c13643-eddc-4b3d-a256-09bb2d876192" containerID="78390e68e43b110eb2c6b6e7feb99eeadc532ef8ad8b7628fabfb5d3436d4c91" exitCode=0 Mar 08 00:44:19 crc kubenswrapper[4762]: I0308 00:44:19.787553 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwrf4" event={"ID":"19c13643-eddc-4b3d-a256-09bb2d876192","Type":"ContainerDied","Data":"78390e68e43b110eb2c6b6e7feb99eeadc532ef8ad8b7628fabfb5d3436d4c91"} Mar 08 00:44:19 crc kubenswrapper[4762]: I0308 00:44:19.787579 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwrf4" event={"ID":"19c13643-eddc-4b3d-a256-09bb2d876192","Type":"ContainerStarted","Data":"f94f43598dc44eeae8bbdb60a3c291858086e530909440d131f8470803ae764b"} Mar 08 00:44:19 crc kubenswrapper[4762]: I0308 00:44:19.789303 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc","Type":"ContainerStarted","Data":"be06e6457d482e4ce6c1954769aaffaf16f4e498cc8a95ba4acb35c349d97e05"} Mar 08 00:44:21 crc kubenswrapper[4762]: I0308 00:44:21.576815 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kkckg" podUID="bcab1df7-ddcc-4784-8a49-0be5161590f2" containerName="ovn-controller" probeResult="failure" output=< Mar 08 00:44:21 crc kubenswrapper[4762]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 08 00:44:21 crc kubenswrapper[4762]: > Mar 08 00:44:21 crc kubenswrapper[4762]: I0308 00:44:21.681512 4762 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:44:21 crc kubenswrapper[4762]: I0308 00:44:21.685069 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ffhbt" Mar 08 00:44:21 crc kubenswrapper[4762]: I0308 00:44:21.902740 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-kkckg-config-gctn8"] Mar 08 00:44:21 crc kubenswrapper[4762]: I0308 00:44:21.904252 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:21 crc kubenswrapper[4762]: I0308 00:44:21.912605 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 08 00:44:21 crc kubenswrapper[4762]: I0308 00:44:21.948336 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kkckg-config-gctn8"] Mar 08 00:44:21 crc kubenswrapper[4762]: I0308 00:44:21.986091 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-run\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:21 crc kubenswrapper[4762]: I0308 00:44:21.986137 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-run-ovn\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:21 crc kubenswrapper[4762]: I0308 00:44:21.986159 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/453668f3-2ec0-4f51-a433-47426565e055-additional-scripts\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:21 crc kubenswrapper[4762]: I0308 00:44:21.986197 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/453668f3-2ec0-4f51-a433-47426565e055-scripts\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:21 crc kubenswrapper[4762]: I0308 00:44:21.986214 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2n6v\" (UniqueName: \"kubernetes.io/projected/453668f3-2ec0-4f51-a433-47426565e055-kube-api-access-d2n6v\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:21 crc kubenswrapper[4762]: I0308 00:44:21.986243 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-log-ovn\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:22 crc kubenswrapper[4762]: I0308 00:44:22.087877 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-run\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:22 crc kubenswrapper[4762]: I0308 00:44:22.087974 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/453668f3-2ec0-4f51-a433-47426565e055-additional-scripts\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:22 crc kubenswrapper[4762]: I0308 00:44:22.088011 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-run-ovn\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:22 crc kubenswrapper[4762]: I0308 00:44:22.088061 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/453668f3-2ec0-4f51-a433-47426565e055-scripts\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:22 crc kubenswrapper[4762]: I0308 00:44:22.088089 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2n6v\" (UniqueName: \"kubernetes.io/projected/453668f3-2ec0-4f51-a433-47426565e055-kube-api-access-d2n6v\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:22 crc kubenswrapper[4762]: I0308 00:44:22.088145 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-log-ovn\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:22 crc kubenswrapper[4762]: I0308 00:44:22.088401 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-run-ovn\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:22 crc kubenswrapper[4762]: I0308 00:44:22.088455 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-log-ovn\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:22 crc kubenswrapper[4762]: I0308 00:44:22.088479 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-run\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:22 crc kubenswrapper[4762]: I0308 00:44:22.088798 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/453668f3-2ec0-4f51-a433-47426565e055-additional-scripts\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:22 crc kubenswrapper[4762]: I0308 00:44:22.092285 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/453668f3-2ec0-4f51-a433-47426565e055-scripts\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:22 crc kubenswrapper[4762]: I0308 00:44:22.113700 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2n6v\" (UniqueName: 
\"kubernetes.io/projected/453668f3-2ec0-4f51-a433-47426565e055-kube-api-access-d2n6v\") pod \"ovn-controller-kkckg-config-gctn8\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:22 crc kubenswrapper[4762]: I0308 00:44:22.232748 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:24 crc kubenswrapper[4762]: I0308 00:44:24.732154 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 08 00:44:24 crc kubenswrapper[4762]: I0308 00:44:24.735182 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 08 00:44:24 crc kubenswrapper[4762]: I0308 00:44:24.847452 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 08 00:44:26 crc kubenswrapper[4762]: I0308 00:44:26.483413 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:44:26 crc kubenswrapper[4762]: I0308 00:44:26.503994 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb5158d2-f742-4eef-8c66-f2db685aeb9e-etc-swift\") pod \"swift-storage-0\" (UID: \"eb5158d2-f742-4eef-8c66-f2db685aeb9e\") " pod="openstack/swift-storage-0" Mar 08 00:44:26 crc kubenswrapper[4762]: I0308 00:44:26.567343 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 08 00:44:26 crc kubenswrapper[4762]: I0308 00:44:26.591450 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 08 00:44:26 crc kubenswrapper[4762]: I0308 00:44:26.603409 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-kkckg" podUID="bcab1df7-ddcc-4784-8a49-0be5161590f2" containerName="ovn-controller" probeResult="failure" output=< Mar 08 00:44:26 crc kubenswrapper[4762]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 08 00:44:26 crc kubenswrapper[4762]: > Mar 08 00:44:26 crc kubenswrapper[4762]: I0308 00:44:26.862893 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="02437d1d-337c-4013-92e1-69125f57e03f" containerName="thanos-sidecar" containerID="cri-o://1daf484e97e7fea5718964a071bf8832a8d2e40ace98e8a916f1cac8251a99ce" gracePeriod=600 Mar 08 00:44:26 crc kubenswrapper[4762]: I0308 00:44:26.862994 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="02437d1d-337c-4013-92e1-69125f57e03f" containerName="config-reloader" containerID="cri-o://b9a950eff2784019b343bd8b9c24930cd52e8164dea92d72db1f2cefa3e85080" gracePeriod=600 Mar 08 00:44:26 crc kubenswrapper[4762]: I0308 00:44:26.862823 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="02437d1d-337c-4013-92e1-69125f57e03f" containerName="prometheus" containerID="cri-o://88d1a8325b11c7bbb84022f2d6dd6b89712b87aab6db08be2876c8bb2d89122f" gracePeriod=600 Mar 08 00:44:27 crc kubenswrapper[4762]: I0308 00:44:27.876415 4762 generic.go:334] "Generic (PLEG): container finished" podID="02437d1d-337c-4013-92e1-69125f57e03f" containerID="1daf484e97e7fea5718964a071bf8832a8d2e40ace98e8a916f1cac8251a99ce" exitCode=0 Mar 08 00:44:27 crc kubenswrapper[4762]: I0308 00:44:27.876733 4762 generic.go:334] "Generic (PLEG): container finished" 
podID="02437d1d-337c-4013-92e1-69125f57e03f" containerID="b9a950eff2784019b343bd8b9c24930cd52e8164dea92d72db1f2cefa3e85080" exitCode=0 Mar 08 00:44:27 crc kubenswrapper[4762]: I0308 00:44:27.876744 4762 generic.go:334] "Generic (PLEG): container finished" podID="02437d1d-337c-4013-92e1-69125f57e03f" containerID="88d1a8325b11c7bbb84022f2d6dd6b89712b87aab6db08be2876c8bb2d89122f" exitCode=0 Mar 08 00:44:27 crc kubenswrapper[4762]: I0308 00:44:27.876517 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"02437d1d-337c-4013-92e1-69125f57e03f","Type":"ContainerDied","Data":"1daf484e97e7fea5718964a071bf8832a8d2e40ace98e8a916f1cac8251a99ce"} Mar 08 00:44:27 crc kubenswrapper[4762]: I0308 00:44:27.876850 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"02437d1d-337c-4013-92e1-69125f57e03f","Type":"ContainerDied","Data":"b9a950eff2784019b343bd8b9c24930cd52e8164dea92d72db1f2cefa3e85080"} Mar 08 00:44:27 crc kubenswrapper[4762]: I0308 00:44:27.876878 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"02437d1d-337c-4013-92e1-69125f57e03f","Type":"ContainerDied","Data":"88d1a8325b11c7bbb84022f2d6dd6b89712b87aab6db08be2876c8bb2d89122f"} Mar 08 00:44:28 crc kubenswrapper[4762]: I0308 00:44:28.134971 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:44:28 crc kubenswrapper[4762]: I0308 00:44:28.433362 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 08 00:44:28 crc kubenswrapper[4762]: I0308 00:44:28.637665 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zwrf4" Mar 08 00:44:28 crc kubenswrapper[4762]: I0308 00:44:28.731154 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkvfr\" (UniqueName: \"kubernetes.io/projected/19c13643-eddc-4b3d-a256-09bb2d876192-kube-api-access-mkvfr\") pod \"19c13643-eddc-4b3d-a256-09bb2d876192\" (UID: \"19c13643-eddc-4b3d-a256-09bb2d876192\") " Mar 08 00:44:28 crc kubenswrapper[4762]: I0308 00:44:28.731490 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19c13643-eddc-4b3d-a256-09bb2d876192-operator-scripts\") pod \"19c13643-eddc-4b3d-a256-09bb2d876192\" (UID: \"19c13643-eddc-4b3d-a256-09bb2d876192\") " Mar 08 00:44:28 crc kubenswrapper[4762]: I0308 00:44:28.732147 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19c13643-eddc-4b3d-a256-09bb2d876192-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19c13643-eddc-4b3d-a256-09bb2d876192" (UID: "19c13643-eddc-4b3d-a256-09bb2d876192"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:28 crc kubenswrapper[4762]: I0308 00:44:28.739080 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19c13643-eddc-4b3d-a256-09bb2d876192-kube-api-access-mkvfr" (OuterVolumeSpecName: "kube-api-access-mkvfr") pod "19c13643-eddc-4b3d-a256-09bb2d876192" (UID: "19c13643-eddc-4b3d-a256-09bb2d876192"). InnerVolumeSpecName "kube-api-access-mkvfr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:28 crc kubenswrapper[4762]: I0308 00:44:28.832610 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkvfr\" (UniqueName: \"kubernetes.io/projected/19c13643-eddc-4b3d-a256-09bb2d876192-kube-api-access-mkvfr\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:28 crc kubenswrapper[4762]: I0308 00:44:28.832652 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19c13643-eddc-4b3d-a256-09bb2d876192-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:28 crc kubenswrapper[4762]: I0308 00:44:28.894833 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwrf4" event={"ID":"19c13643-eddc-4b3d-a256-09bb2d876192","Type":"ContainerDied","Data":"f94f43598dc44eeae8bbdb60a3c291858086e530909440d131f8470803ae764b"} Mar 08 00:44:28 crc kubenswrapper[4762]: I0308 00:44:28.894859 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-zwrf4" Mar 08 00:44:28 crc kubenswrapper[4762]: I0308 00:44:28.894875 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f94f43598dc44eeae8bbdb60a3c291858086e530909440d131f8470803ae764b" Mar 08 00:44:28 crc kubenswrapper[4762]: I0308 00:44:28.907006 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc","Type":"ContainerStarted","Data":"7715462ec62888e002bcec09602491f7fc77cae21ff29eef192978b1b3a23d98"} Mar 08 00:44:28 crc kubenswrapper[4762]: I0308 00:44:28.931614 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.096689302 podStartE2EDuration="10.931591498s" podCreationTimestamp="2026-03-08 00:44:18 +0000 UTC" firstStartedPulling="2026-03-08 00:44:19.766073165 +0000 UTC m=+1281.240217519" lastFinishedPulling="2026-03-08 00:44:28.600975371 +0000 UTC m=+1290.075119715" observedRunningTime="2026-03-08 00:44:28.923707218 +0000 UTC m=+1290.397851582" watchObservedRunningTime="2026-03-08 00:44:28.931591498 +0000 UTC m=+1290.405735862" Mar 08 00:44:28 crc kubenswrapper[4762]: I0308 00:44:28.937011 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.035574 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-config\") pod \"02437d1d-337c-4013-92e1-69125f57e03f\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.035640 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-2\") pod \"02437d1d-337c-4013-92e1-69125f57e03f\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.035701 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"02437d1d-337c-4013-92e1-69125f57e03f\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.035738 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-0\") pod \"02437d1d-337c-4013-92e1-69125f57e03f\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.035802 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-web-config\") pod \"02437d1d-337c-4013-92e1-69125f57e03f\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.035841 4762 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-1\") pod \"02437d1d-337c-4013-92e1-69125f57e03f\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.035869 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/02437d1d-337c-4013-92e1-69125f57e03f-config-out\") pod \"02437d1d-337c-4013-92e1-69125f57e03f\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") " Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.037274 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "02437d1d-337c-4013-92e1-69125f57e03f" (UID: "02437d1d-337c-4013-92e1-69125f57e03f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.037468 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "02437d1d-337c-4013-92e1-69125f57e03f" (UID: "02437d1d-337c-4013-92e1-69125f57e03f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.037681 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "02437d1d-337c-4013-92e1-69125f57e03f" (UID: "02437d1d-337c-4013-92e1-69125f57e03f"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.047486 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-config" (OuterVolumeSpecName: "config") pod "02437d1d-337c-4013-92e1-69125f57e03f" (UID: "02437d1d-337c-4013-92e1-69125f57e03f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.047624 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02437d1d-337c-4013-92e1-69125f57e03f-config-out" (OuterVolumeSpecName: "config-out") pod "02437d1d-337c-4013-92e1-69125f57e03f" (UID: "02437d1d-337c-4013-92e1-69125f57e03f"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.057371 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "02437d1d-337c-4013-92e1-69125f57e03f" (UID: "02437d1d-337c-4013-92e1-69125f57e03f"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.102821 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-web-config" (OuterVolumeSpecName: "web-config") pod "02437d1d-337c-4013-92e1-69125f57e03f" (UID: "02437d1d-337c-4013-92e1-69125f57e03f"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.133058 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-kkckg-config-gctn8"]
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.136947 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq6hr\" (UniqueName: \"kubernetes.io/projected/02437d1d-337c-4013-92e1-69125f57e03f-kube-api-access-rq6hr\") pod \"02437d1d-337c-4013-92e1-69125f57e03f\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") "
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.136990 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/02437d1d-337c-4013-92e1-69125f57e03f-tls-assets\") pod \"02437d1d-337c-4013-92e1-69125f57e03f\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") "
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.137018 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-thanos-prometheus-http-client-file\") pod \"02437d1d-337c-4013-92e1-69125f57e03f\" (UID: \"02437d1d-337c-4013-92e1-69125f57e03f\") "
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.137848 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-config\") on node \"crc\" DevicePath \"\""
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.137877 4762 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.137932 4762 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.137950 4762 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.137967 4762 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-web-config\") on node \"crc\" DevicePath \"\""
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.137979 4762 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/02437d1d-337c-4013-92e1-69125f57e03f-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.138008 4762 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/02437d1d-337c-4013-92e1-69125f57e03f-config-out\") on node \"crc\" DevicePath \"\""
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.142241 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02437d1d-337c-4013-92e1-69125f57e03f-kube-api-access-rq6hr" (OuterVolumeSpecName: "kube-api-access-rq6hr") pod "02437d1d-337c-4013-92e1-69125f57e03f" (UID: "02437d1d-337c-4013-92e1-69125f57e03f"). InnerVolumeSpecName "kube-api-access-rq6hr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.143508 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02437d1d-337c-4013-92e1-69125f57e03f-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "02437d1d-337c-4013-92e1-69125f57e03f" (UID: "02437d1d-337c-4013-92e1-69125f57e03f"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.150537 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "02437d1d-337c-4013-92e1-69125f57e03f" (UID: "02437d1d-337c-4013-92e1-69125f57e03f"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.188670 4762 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.239590 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq6hr\" (UniqueName: \"kubernetes.io/projected/02437d1d-337c-4013-92e1-69125f57e03f-kube-api-access-rq6hr\") on node \"crc\" DevicePath \"\""
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.239643 4762 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/02437d1d-337c-4013-92e1-69125f57e03f-tls-assets\") on node \"crc\" DevicePath \"\""
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.239668 4762 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/02437d1d-337c-4013-92e1-69125f57e03f-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.239680 4762 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.246564 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Mar 08 00:44:29 crc kubenswrapper[4762]: W0308 00:44:29.271244 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb5158d2_f742_4eef_8c66_f2db685aeb9e.slice/crio-a32f5e55d7356ab7ce1af8b3129686b191e55dc4453edddc2a224bb46b292075 WatchSource:0}: Error finding container a32f5e55d7356ab7ce1af8b3129686b191e55dc4453edddc2a224bb46b292075: Status 404 returned error can't find the container with id a32f5e55d7356ab7ce1af8b3129686b191e55dc4453edddc2a224bb46b292075
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.917240 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5mfsp" event={"ID":"ff70ef09-d101-4c3f-8a03-95b5fbe0b250","Type":"ContainerStarted","Data":"17036cd87a89710f741f57d9c87eb414a74f9dcd46354fcc280150141a4acc8a"}
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.920566 4762 generic.go:334] "Generic (PLEG): container finished" podID="453668f3-2ec0-4f51-a433-47426565e055" containerID="0f2f9d8dbf21fd8bfcdf8b0129c54ed1381f9c1ec33a9a01ecbe25f718a918ad" exitCode=0
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.920712 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kkckg-config-gctn8" event={"ID":"453668f3-2ec0-4f51-a433-47426565e055","Type":"ContainerDied","Data":"0f2f9d8dbf21fd8bfcdf8b0129c54ed1381f9c1ec33a9a01ecbe25f718a918ad"}
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.920740 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kkckg-config-gctn8" event={"ID":"453668f3-2ec0-4f51-a433-47426565e055","Type":"ContainerStarted","Data":"2a68f81edf8df247e29490911e9ce349c01b7e2615cfc3d65a661679756f892a"}
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.926820 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"02437d1d-337c-4013-92e1-69125f57e03f","Type":"ContainerDied","Data":"dd5e5c5182ac68bc25b470fb5ac90373ef6b19e16386618d58c133b839bcf349"}
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.926862 4762 scope.go:117] "RemoveContainer" containerID="1daf484e97e7fea5718964a071bf8832a8d2e40ace98e8a916f1cac8251a99ce"
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.926996 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.930520 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eb5158d2-f742-4eef-8c66-f2db685aeb9e","Type":"ContainerStarted","Data":"a32f5e55d7356ab7ce1af8b3129686b191e55dc4453edddc2a224bb46b292075"}
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.949900 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-5mfsp" podStartSLOduration=3.094583048 podStartE2EDuration="18.949884161s" podCreationTimestamp="2026-03-08 00:44:11 +0000 UTC" firstStartedPulling="2026-03-08 00:44:12.744861794 +0000 UTC m=+1274.219006128" lastFinishedPulling="2026-03-08 00:44:28.600162897 +0000 UTC m=+1290.074307241" observedRunningTime="2026-03-08 00:44:29.943864188 +0000 UTC m=+1291.418008532" watchObservedRunningTime="2026-03-08 00:44:29.949884161 +0000 UTC m=+1291.424028505"
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.953568 4762 scope.go:117] "RemoveContainer" containerID="b9a950eff2784019b343bd8b9c24930cd52e8164dea92d72db1f2cefa3e85080"
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.975859 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 08 00:44:29 crc kubenswrapper[4762]: I0308 00:44:29.995370 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.015428 4762 scope.go:117] "RemoveContainer" containerID="88d1a8325b11c7bbb84022f2d6dd6b89712b87aab6db08be2876c8bb2d89122f"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.043726 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 08 00:44:30 crc kubenswrapper[4762]: E0308 00:44:30.044070 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02437d1d-337c-4013-92e1-69125f57e03f" containerName="prometheus"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.044088 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="02437d1d-337c-4013-92e1-69125f57e03f" containerName="prometheus"
Mar 08 00:44:30 crc kubenswrapper[4762]: E0308 00:44:30.044097 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02437d1d-337c-4013-92e1-69125f57e03f" containerName="config-reloader"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.044105 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="02437d1d-337c-4013-92e1-69125f57e03f" containerName="config-reloader"
Mar 08 00:44:30 crc kubenswrapper[4762]: E0308 00:44:30.044115 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c13643-eddc-4b3d-a256-09bb2d876192" containerName="mariadb-account-create-update"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.044123 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c13643-eddc-4b3d-a256-09bb2d876192" containerName="mariadb-account-create-update"
Mar 08 00:44:30 crc kubenswrapper[4762]: E0308 00:44:30.044140 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02437d1d-337c-4013-92e1-69125f57e03f" containerName="thanos-sidecar"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.044146 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="02437d1d-337c-4013-92e1-69125f57e03f" containerName="thanos-sidecar"
Mar 08 00:44:30 crc kubenswrapper[4762]: E0308 00:44:30.044157 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02437d1d-337c-4013-92e1-69125f57e03f" containerName="init-config-reloader"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.044164 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="02437d1d-337c-4013-92e1-69125f57e03f" containerName="init-config-reloader"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.044324 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="02437d1d-337c-4013-92e1-69125f57e03f" containerName="prometheus"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.044338 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="19c13643-eddc-4b3d-a256-09bb2d876192" containerName="mariadb-account-create-update"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.044354 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="02437d1d-337c-4013-92e1-69125f57e03f" containerName="config-reloader"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.044363 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="02437d1d-337c-4013-92e1-69125f57e03f" containerName="thanos-sidecar"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.045846 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.059480 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.059743 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-gpv7p"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.059911 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.060247 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.060377 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.060475 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.060568 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.060572 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.069247 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.076454 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-pc8mm"]
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.077679 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-pc8mm"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.129292 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.142964 4762 scope.go:117] "RemoveContainer" containerID="c8094726bd70c9914115430c557943bb1dfd829b72cda14ae7a8bc2882dfe912"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.156398 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8103d22d-043e-4af1-a19d-307905e2a05f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.156487 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8103d22d-043e-4af1-a19d-307905e2a05f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.156709 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8103d22d-043e-4af1-a19d-307905e2a05f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.156822 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.173851 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-config\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.173947 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.174007 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.174042 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8103d22d-043e-4af1-a19d-307905e2a05f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.174075 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.174122 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8103d22d-043e-4af1-a19d-307905e2a05f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.174165 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd727\" (UniqueName: \"kubernetes.io/projected/8103d22d-043e-4af1-a19d-307905e2a05f-kube-api-access-xd727\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.174192 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.174280 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.173852 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-pc8mm"]
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.225948 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-c616-account-create-update-tvdx2"]
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.227165 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-c616-account-create-update-tvdx2"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.247120 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.285334 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b-operator-scripts\") pod \"heat-db-create-pc8mm\" (UID: \"2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b\") " pod="openstack/heat-db-create-pc8mm"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.285393 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.285488 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8103d22d-043e-4af1-a19d-307905e2a05f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.285542 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8103d22d-043e-4af1-a19d-307905e2a05f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.285574 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8103d22d-043e-4af1-a19d-307905e2a05f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.285626 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.285663 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-config\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.285703 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.285729 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m6cx\" (UniqueName: \"kubernetes.io/projected/2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b-kube-api-access-7m6cx\") pod \"heat-db-create-pc8mm\" (UID: \"2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b\") " pod="openstack/heat-db-create-pc8mm"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.285781 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.285811 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8103d22d-043e-4af1-a19d-307905e2a05f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.285834 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.285875 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8103d22d-043e-4af1-a19d-307905e2a05f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.285903 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd727\" (UniqueName: \"kubernetes.io/projected/8103d22d-043e-4af1-a19d-307905e2a05f-kube-api-access-xd727\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.285928 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.288102 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8103d22d-043e-4af1-a19d-307905e2a05f-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.292117 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.292613 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8103d22d-043e-4af1-a19d-307905e2a05f-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.293463 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8103d22d-043e-4af1-a19d-307905e2a05f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.315519 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.315822 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-config\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.317110 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.317148 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-c616-account-create-update-tvdx2"]
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.322552 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.333220 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8103d22d-043e-4af1-a19d-307905e2a05f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.333299 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8103d22d-043e-4af1-a19d-307905e2a05f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.335037 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.335178 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8103d22d-043e-4af1-a19d-307905e2a05f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.344462 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd727\" (UniqueName: \"kubernetes.io/projected/8103d22d-043e-4af1-a19d-307905e2a05f-kube-api-access-xd727\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.353078 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"prometheus-metric-storage-0\" (UID: \"8103d22d-043e-4af1-a19d-307905e2a05f\") " pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.382289 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.387860 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b-operator-scripts\") pod \"heat-db-create-pc8mm\" (UID: \"2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b\") " pod="openstack/heat-db-create-pc8mm"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.388029 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sj44\" (UniqueName: \"kubernetes.io/projected/e1da1613-f9de-4860-a63a-1ecd85e8f340-kube-api-access-6sj44\") pod \"heat-c616-account-create-update-tvdx2\" (UID: \"e1da1613-f9de-4860-a63a-1ecd85e8f340\") " pod="openstack/heat-c616-account-create-update-tvdx2"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.388053 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m6cx\" (UniqueName: \"kubernetes.io/projected/2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b-kube-api-access-7m6cx\") pod \"heat-db-create-pc8mm\" (UID: \"2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b\") " pod="openstack/heat-db-create-pc8mm"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.388102 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1da1613-f9de-4860-a63a-1ecd85e8f340-operator-scripts\") pod \"heat-c616-account-create-update-tvdx2\" (UID: \"e1da1613-f9de-4860-a63a-1ecd85e8f340\") " pod="openstack/heat-c616-account-create-update-tvdx2"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.388810 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b-operator-scripts\") pod \"heat-db-create-pc8mm\" (UID: \"2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b\") " pod="openstack/heat-db-create-pc8mm"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.392699 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-82a0-account-create-update-c5g4h"]
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.394167 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-82a0-account-create-update-c5g4h"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.405063 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.488996 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-82a0-account-create-update-c5g4h"]
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.482851 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m6cx\" (UniqueName: \"kubernetes.io/projected/2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b-kube-api-access-7m6cx\") pod \"heat-db-create-pc8mm\" (UID: \"2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b\") " pod="openstack/heat-db-create-pc8mm"
Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.500256 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1da1613-f9de-4860-a63a-1ecd85e8f340-operator-scripts\") pod \"heat-c616-account-create-update-tvdx2\" (UID: \"e1da1613-f9de-4860-a63a-1ecd85e8f340\") " pod="openstack/heat-c616-account-create-update-tvdx2"
Mar 08 00:44:30 crc
kubenswrapper[4762]: I0308 00:44:30.500421 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpngr\" (UniqueName: \"kubernetes.io/projected/2844b501-49b8-4b08-adbe-30159ca77f47-kube-api-access-fpngr\") pod \"cinder-82a0-account-create-update-c5g4h\" (UID: \"2844b501-49b8-4b08-adbe-30159ca77f47\") " pod="openstack/cinder-82a0-account-create-update-c5g4h" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.500507 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2844b501-49b8-4b08-adbe-30159ca77f47-operator-scripts\") pod \"cinder-82a0-account-create-update-c5g4h\" (UID: \"2844b501-49b8-4b08-adbe-30159ca77f47\") " pod="openstack/cinder-82a0-account-create-update-c5g4h" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.500640 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sj44\" (UniqueName: \"kubernetes.io/projected/e1da1613-f9de-4860-a63a-1ecd85e8f340-kube-api-access-6sj44\") pod \"heat-c616-account-create-update-tvdx2\" (UID: \"e1da1613-f9de-4860-a63a-1ecd85e8f340\") " pod="openstack/heat-c616-account-create-update-tvdx2" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.505677 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-pc8mm" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.510242 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1da1613-f9de-4860-a63a-1ecd85e8f340-operator-scripts\") pod \"heat-c616-account-create-update-tvdx2\" (UID: \"e1da1613-f9de-4860-a63a-1ecd85e8f340\") " pod="openstack/heat-c616-account-create-update-tvdx2" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.548567 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-f242d"] Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.557701 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f242d" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.575279 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-8nwnr"] Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.576466 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8nwnr" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.589207 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.589470 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.589566 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sj44\" (UniqueName: \"kubernetes.io/projected/e1da1613-f9de-4860-a63a-1ecd85e8f340-kube-api-access-6sj44\") pod \"heat-c616-account-create-update-tvdx2\" (UID: \"e1da1613-f9de-4860-a63a-1ecd85e8f340\") " pod="openstack/heat-c616-account-create-update-tvdx2" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.589631 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rwwq6" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.589730 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.590099 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-f242d"] Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.591573 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8nwnr"] Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.606881 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2844b501-49b8-4b08-adbe-30159ca77f47-operator-scripts\") pod \"cinder-82a0-account-create-update-c5g4h\" (UID: \"2844b501-49b8-4b08-adbe-30159ca77f47\") " pod="openstack/cinder-82a0-account-create-update-c5g4h" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.607002 4762 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-fpngr\" (UniqueName: \"kubernetes.io/projected/2844b501-49b8-4b08-adbe-30159ca77f47-kube-api-access-fpngr\") pod \"cinder-82a0-account-create-update-c5g4h\" (UID: \"2844b501-49b8-4b08-adbe-30159ca77f47\") " pod="openstack/cinder-82a0-account-create-update-c5g4h" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.607892 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2844b501-49b8-4b08-adbe-30159ca77f47-operator-scripts\") pod \"cinder-82a0-account-create-update-c5g4h\" (UID: \"2844b501-49b8-4b08-adbe-30159ca77f47\") " pod="openstack/cinder-82a0-account-create-update-c5g4h" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.616373 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mhxgz"] Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.617576 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mhxgz" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.627440 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpngr\" (UniqueName: \"kubernetes.io/projected/2844b501-49b8-4b08-adbe-30159ca77f47-kube-api-access-fpngr\") pod \"cinder-82a0-account-create-update-c5g4h\" (UID: \"2844b501-49b8-4b08-adbe-30159ca77f47\") " pod="openstack/cinder-82a0-account-create-update-c5g4h" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.649843 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mhxgz"] Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.687319 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-fd64-account-create-update-dlhcc"] Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.688514 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fd64-account-create-update-dlhcc" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.694387 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.694897 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fd64-account-create-update-dlhcc"] Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.708823 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glj99\" (UniqueName: \"kubernetes.io/projected/da255bbb-75db-4a07-8547-2bf0794edd04-kube-api-access-glj99\") pod \"cinder-db-create-f242d\" (UID: \"da255bbb-75db-4a07-8547-2bf0794edd04\") " pod="openstack/cinder-db-create-f242d" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.708867 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjgr9\" (UniqueName: \"kubernetes.io/projected/55ac7177-d14a-4b66-bab1-d8de8b6d8bdb-kube-api-access-jjgr9\") pod \"neutron-db-create-mhxgz\" (UID: \"55ac7177-d14a-4b66-bab1-d8de8b6d8bdb\") " pod="openstack/neutron-db-create-mhxgz" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.708894 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2p4p\" (UniqueName: \"kubernetes.io/projected/9bfc6a86-5a16-4814-9de9-f8cf060a966f-kube-api-access-z2p4p\") pod \"keystone-db-sync-8nwnr\" (UID: \"9bfc6a86-5a16-4814-9de9-f8cf060a966f\") " pod="openstack/keystone-db-sync-8nwnr" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.708931 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bfc6a86-5a16-4814-9de9-f8cf060a966f-combined-ca-bundle\") pod \"keystone-db-sync-8nwnr\" (UID: 
\"9bfc6a86-5a16-4814-9de9-f8cf060a966f\") " pod="openstack/keystone-db-sync-8nwnr" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.708982 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bfc6a86-5a16-4814-9de9-f8cf060a966f-config-data\") pod \"keystone-db-sync-8nwnr\" (UID: \"9bfc6a86-5a16-4814-9de9-f8cf060a966f\") " pod="openstack/keystone-db-sync-8nwnr" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.709030 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55ac7177-d14a-4b66-bab1-d8de8b6d8bdb-operator-scripts\") pod \"neutron-db-create-mhxgz\" (UID: \"55ac7177-d14a-4b66-bab1-d8de8b6d8bdb\") " pod="openstack/neutron-db-create-mhxgz" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.709048 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da255bbb-75db-4a07-8547-2bf0794edd04-operator-scripts\") pod \"cinder-db-create-f242d\" (UID: \"da255bbb-75db-4a07-8547-2bf0794edd04\") " pod="openstack/cinder-db-create-f242d" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.759992 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-76c6s"] Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.761251 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-76c6s" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.793291 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4682-account-create-update-fmc2n"] Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.794836 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4682-account-create-update-fmc2n" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.797007 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.802000 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-c616-account-create-update-tvdx2" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.804950 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-82a0-account-create-update-c5g4h" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.809789 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-76c6s"] Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.810622 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjgr9\" (UniqueName: \"kubernetes.io/projected/55ac7177-d14a-4b66-bab1-d8de8b6d8bdb-kube-api-access-jjgr9\") pod \"neutron-db-create-mhxgz\" (UID: \"55ac7177-d14a-4b66-bab1-d8de8b6d8bdb\") " pod="openstack/neutron-db-create-mhxgz" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.810653 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2p4p\" (UniqueName: \"kubernetes.io/projected/9bfc6a86-5a16-4814-9de9-f8cf060a966f-kube-api-access-z2p4p\") pod \"keystone-db-sync-8nwnr\" (UID: \"9bfc6a86-5a16-4814-9de9-f8cf060a966f\") " pod="openstack/keystone-db-sync-8nwnr" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.810695 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2595315c-6bb3-4ac1-a860-004cf18c89af-operator-scripts\") pod \"neutron-fd64-account-create-update-dlhcc\" (UID: \"2595315c-6bb3-4ac1-a860-004cf18c89af\") " 
pod="openstack/neutron-fd64-account-create-update-dlhcc" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.810720 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bfc6a86-5a16-4814-9de9-f8cf060a966f-combined-ca-bundle\") pod \"keystone-db-sync-8nwnr\" (UID: \"9bfc6a86-5a16-4814-9de9-f8cf060a966f\") " pod="openstack/keystone-db-sync-8nwnr" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.810744 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jgzs\" (UniqueName: \"kubernetes.io/projected/2595315c-6bb3-4ac1-a860-004cf18c89af-kube-api-access-9jgzs\") pod \"neutron-fd64-account-create-update-dlhcc\" (UID: \"2595315c-6bb3-4ac1-a860-004cf18c89af\") " pod="openstack/neutron-fd64-account-create-update-dlhcc" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.810927 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bfc6a86-5a16-4814-9de9-f8cf060a966f-config-data\") pod \"keystone-db-sync-8nwnr\" (UID: \"9bfc6a86-5a16-4814-9de9-f8cf060a966f\") " pod="openstack/keystone-db-sync-8nwnr" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.810987 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55ac7177-d14a-4b66-bab1-d8de8b6d8bdb-operator-scripts\") pod \"neutron-db-create-mhxgz\" (UID: \"55ac7177-d14a-4b66-bab1-d8de8b6d8bdb\") " pod="openstack/neutron-db-create-mhxgz" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.811006 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da255bbb-75db-4a07-8547-2bf0794edd04-operator-scripts\") pod \"cinder-db-create-f242d\" (UID: \"da255bbb-75db-4a07-8547-2bf0794edd04\") " 
pod="openstack/cinder-db-create-f242d" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.811050 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glj99\" (UniqueName: \"kubernetes.io/projected/da255bbb-75db-4a07-8547-2bf0794edd04-kube-api-access-glj99\") pod \"cinder-db-create-f242d\" (UID: \"da255bbb-75db-4a07-8547-2bf0794edd04\") " pod="openstack/cinder-db-create-f242d" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.816357 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55ac7177-d14a-4b66-bab1-d8de8b6d8bdb-operator-scripts\") pod \"neutron-db-create-mhxgz\" (UID: \"55ac7177-d14a-4b66-bab1-d8de8b6d8bdb\") " pod="openstack/neutron-db-create-mhxgz" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.817045 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da255bbb-75db-4a07-8547-2bf0794edd04-operator-scripts\") pod \"cinder-db-create-f242d\" (UID: \"da255bbb-75db-4a07-8547-2bf0794edd04\") " pod="openstack/cinder-db-create-f242d" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.826931 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4682-account-create-update-fmc2n"] Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.830659 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bfc6a86-5a16-4814-9de9-f8cf060a966f-config-data\") pod \"keystone-db-sync-8nwnr\" (UID: \"9bfc6a86-5a16-4814-9de9-f8cf060a966f\") " pod="openstack/keystone-db-sync-8nwnr" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.831335 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bfc6a86-5a16-4814-9de9-f8cf060a966f-combined-ca-bundle\") pod \"keystone-db-sync-8nwnr\" 
(UID: \"9bfc6a86-5a16-4814-9de9-f8cf060a966f\") " pod="openstack/keystone-db-sync-8nwnr" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.833635 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glj99\" (UniqueName: \"kubernetes.io/projected/da255bbb-75db-4a07-8547-2bf0794edd04-kube-api-access-glj99\") pod \"cinder-db-create-f242d\" (UID: \"da255bbb-75db-4a07-8547-2bf0794edd04\") " pod="openstack/cinder-db-create-f242d" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.835162 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2p4p\" (UniqueName: \"kubernetes.io/projected/9bfc6a86-5a16-4814-9de9-f8cf060a966f-kube-api-access-z2p4p\") pod \"keystone-db-sync-8nwnr\" (UID: \"9bfc6a86-5a16-4814-9de9-f8cf060a966f\") " pod="openstack/keystone-db-sync-8nwnr" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.843100 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjgr9\" (UniqueName: \"kubernetes.io/projected/55ac7177-d14a-4b66-bab1-d8de8b6d8bdb-kube-api-access-jjgr9\") pod \"neutron-db-create-mhxgz\" (UID: \"55ac7177-d14a-4b66-bab1-d8de8b6d8bdb\") " pod="openstack/neutron-db-create-mhxgz" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.915143 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-f242d" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.915375 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae10ab24-1407-4b27-97ab-3424e4b85a03-operator-scripts\") pod \"barbican-4682-account-create-update-fmc2n\" (UID: \"ae10ab24-1407-4b27-97ab-3424e4b85a03\") " pod="openstack/barbican-4682-account-create-update-fmc2n" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.915415 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11fd63c-9a18-4c14-a7fc-68bca559ce0f-operator-scripts\") pod \"barbican-db-create-76c6s\" (UID: \"d11fd63c-9a18-4c14-a7fc-68bca559ce0f\") " pod="openstack/barbican-db-create-76c6s" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.915472 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2595315c-6bb3-4ac1-a860-004cf18c89af-operator-scripts\") pod \"neutron-fd64-account-create-update-dlhcc\" (UID: \"2595315c-6bb3-4ac1-a860-004cf18c89af\") " pod="openstack/neutron-fd64-account-create-update-dlhcc" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.915500 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jgzs\" (UniqueName: \"kubernetes.io/projected/2595315c-6bb3-4ac1-a860-004cf18c89af-kube-api-access-9jgzs\") pod \"neutron-fd64-account-create-update-dlhcc\" (UID: \"2595315c-6bb3-4ac1-a860-004cf18c89af\") " pod="openstack/neutron-fd64-account-create-update-dlhcc" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.915555 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf564\" (UniqueName: 
\"kubernetes.io/projected/ae10ab24-1407-4b27-97ab-3424e4b85a03-kube-api-access-wf564\") pod \"barbican-4682-account-create-update-fmc2n\" (UID: \"ae10ab24-1407-4b27-97ab-3424e4b85a03\") " pod="openstack/barbican-4682-account-create-update-fmc2n" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.915609 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s56x5\" (UniqueName: \"kubernetes.io/projected/d11fd63c-9a18-4c14-a7fc-68bca559ce0f-kube-api-access-s56x5\") pod \"barbican-db-create-76c6s\" (UID: \"d11fd63c-9a18-4c14-a7fc-68bca559ce0f\") " pod="openstack/barbican-db-create-76c6s" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.916360 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2595315c-6bb3-4ac1-a860-004cf18c89af-operator-scripts\") pod \"neutron-fd64-account-create-update-dlhcc\" (UID: \"2595315c-6bb3-4ac1-a860-004cf18c89af\") " pod="openstack/neutron-fd64-account-create-update-dlhcc" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.926145 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8nwnr" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.945180 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jgzs\" (UniqueName: \"kubernetes.io/projected/2595315c-6bb3-4ac1-a860-004cf18c89af-kube-api-access-9jgzs\") pod \"neutron-fd64-account-create-update-dlhcc\" (UID: \"2595315c-6bb3-4ac1-a860-004cf18c89af\") " pod="openstack/neutron-fd64-account-create-update-dlhcc" Mar 08 00:44:30 crc kubenswrapper[4762]: I0308 00:44:30.951994 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mhxgz" Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.017066 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s56x5\" (UniqueName: \"kubernetes.io/projected/d11fd63c-9a18-4c14-a7fc-68bca559ce0f-kube-api-access-s56x5\") pod \"barbican-db-create-76c6s\" (UID: \"d11fd63c-9a18-4c14-a7fc-68bca559ce0f\") " pod="openstack/barbican-db-create-76c6s" Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.017156 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae10ab24-1407-4b27-97ab-3424e4b85a03-operator-scripts\") pod \"barbican-4682-account-create-update-fmc2n\" (UID: \"ae10ab24-1407-4b27-97ab-3424e4b85a03\") " pod="openstack/barbican-4682-account-create-update-fmc2n" Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.017183 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11fd63c-9a18-4c14-a7fc-68bca559ce0f-operator-scripts\") pod \"barbican-db-create-76c6s\" (UID: \"d11fd63c-9a18-4c14-a7fc-68bca559ce0f\") " pod="openstack/barbican-db-create-76c6s" Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.017285 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf564\" (UniqueName: \"kubernetes.io/projected/ae10ab24-1407-4b27-97ab-3424e4b85a03-kube-api-access-wf564\") pod \"barbican-4682-account-create-update-fmc2n\" (UID: \"ae10ab24-1407-4b27-97ab-3424e4b85a03\") " pod="openstack/barbican-4682-account-create-update-fmc2n" Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.018782 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11fd63c-9a18-4c14-a7fc-68bca559ce0f-operator-scripts\") pod \"barbican-db-create-76c6s\" (UID: 
\"d11fd63c-9a18-4c14-a7fc-68bca559ce0f\") " pod="openstack/barbican-db-create-76c6s" Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.020457 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae10ab24-1407-4b27-97ab-3424e4b85a03-operator-scripts\") pod \"barbican-4682-account-create-update-fmc2n\" (UID: \"ae10ab24-1407-4b27-97ab-3424e4b85a03\") " pod="openstack/barbican-4682-account-create-update-fmc2n" Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.035375 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fd64-account-create-update-dlhcc" Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.041298 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s56x5\" (UniqueName: \"kubernetes.io/projected/d11fd63c-9a18-4c14-a7fc-68bca559ce0f-kube-api-access-s56x5\") pod \"barbican-db-create-76c6s\" (UID: \"d11fd63c-9a18-4c14-a7fc-68bca559ce0f\") " pod="openstack/barbican-db-create-76c6s" Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.050561 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf564\" (UniqueName: \"kubernetes.io/projected/ae10ab24-1407-4b27-97ab-3424e4b85a03-kube-api-access-wf564\") pod \"barbican-4682-account-create-update-fmc2n\" (UID: \"ae10ab24-1407-4b27-97ab-3424e4b85a03\") " pod="openstack/barbican-4682-account-create-update-fmc2n" Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.083620 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-76c6s" Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.141675 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-pc8mm"] Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.147726 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4682-account-create-update-fmc2n" Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.159096 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.292915 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02437d1d-337c-4013-92e1-69125f57e03f" path="/var/lib/kubelet/pods/02437d1d-337c-4013-92e1-69125f57e03f/volumes" Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.459676 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-82a0-account-create-update-c5g4h"] Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.563878 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-c616-account-create-update-tvdx2"] Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.579247 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-kkckg" Mar 08 00:44:31 crc kubenswrapper[4762]: I0308 00:44:31.732114 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-f242d"] Mar 08 00:44:32 crc kubenswrapper[4762]: W0308 00:44:32.170387 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1da1613_f9de_4860_a63a_1ecd85e8f340.slice/crio-0573b115c237b2c3929d7de92e4f6f2a500bca4799b48d78730d070d7cef676f WatchSource:0}: Error finding container 0573b115c237b2c3929d7de92e4f6f2a500bca4799b48d78730d070d7cef676f: Status 404 returned error can't find the container with id 0573b115c237b2c3929d7de92e4f6f2a500bca4799b48d78730d070d7cef676f Mar 08 00:44:32 crc kubenswrapper[4762]: W0308 00:44:32.174711 4762 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2844b501_49b8_4b08_adbe_30159ca77f47.slice/crio-f093b935f6ab85c7b2f034c225e1f3959d7d66ee2bba4ea2e358764d74a1bfe4 WatchSource:0}: Error finding container f093b935f6ab85c7b2f034c225e1f3959d7d66ee2bba4ea2e358764d74a1bfe4: Status 404 returned error can't find the container with id f093b935f6ab85c7b2f034c225e1f3959d7d66ee2bba4ea2e358764d74a1bfe4 Mar 08 00:44:32 crc kubenswrapper[4762]: W0308 00:44:32.183064 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8103d22d_043e_4af1_a19d_307905e2a05f.slice/crio-79837a458ac19a96fcabcdb66ac986f261e849fa5ad607cd4c9586a623c830d4 WatchSource:0}: Error finding container 79837a458ac19a96fcabcdb66ac986f261e849fa5ad607cd4c9586a623c830d4: Status 404 returned error can't find the container with id 79837a458ac19a96fcabcdb66ac986f261e849fa5ad607cd4c9586a623c830d4 Mar 08 00:44:32 crc kubenswrapper[4762]: W0308 00:44:32.185379 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda255bbb_75db_4a07_8547_2bf0794edd04.slice/crio-f8418fbe2f1365e3ad5b8cc6e1a3ec1d1f7e036794c2460100b750466f08069c WatchSource:0}: Error finding container f8418fbe2f1365e3ad5b8cc6e1a3ec1d1f7e036794c2460100b750466f08069c: Status 404 returned error can't find the container with id f8418fbe2f1365e3ad5b8cc6e1a3ec1d1f7e036794c2460100b750466f08069c Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.310559 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.444403 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2n6v\" (UniqueName: \"kubernetes.io/projected/453668f3-2ec0-4f51-a433-47426565e055-kube-api-access-d2n6v\") pod \"453668f3-2ec0-4f51-a433-47426565e055\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.444803 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/453668f3-2ec0-4f51-a433-47426565e055-additional-scripts\") pod \"453668f3-2ec0-4f51-a433-47426565e055\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.444838 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/453668f3-2ec0-4f51-a433-47426565e055-scripts\") pod \"453668f3-2ec0-4f51-a433-47426565e055\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.445404 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/453668f3-2ec0-4f51-a433-47426565e055-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "453668f3-2ec0-4f51-a433-47426565e055" (UID: "453668f3-2ec0-4f51-a433-47426565e055"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.445634 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/453668f3-2ec0-4f51-a433-47426565e055-scripts" (OuterVolumeSpecName: "scripts") pod "453668f3-2ec0-4f51-a433-47426565e055" (UID: "453668f3-2ec0-4f51-a433-47426565e055"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.445736 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-run-ovn\") pod \"453668f3-2ec0-4f51-a433-47426565e055\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.445794 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "453668f3-2ec0-4f51-a433-47426565e055" (UID: "453668f3-2ec0-4f51-a433-47426565e055"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.445871 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-run\") pod \"453668f3-2ec0-4f51-a433-47426565e055\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.445890 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-log-ovn\") pod \"453668f3-2ec0-4f51-a433-47426565e055\" (UID: \"453668f3-2ec0-4f51-a433-47426565e055\") " Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.445941 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-run" (OuterVolumeSpecName: "var-run") pod "453668f3-2ec0-4f51-a433-47426565e055" (UID: "453668f3-2ec0-4f51-a433-47426565e055"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.446030 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "453668f3-2ec0-4f51-a433-47426565e055" (UID: "453668f3-2ec0-4f51-a433-47426565e055"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.446491 4762 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/453668f3-2ec0-4f51-a433-47426565e055-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.446514 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/453668f3-2ec0-4f51-a433-47426565e055-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.446526 4762 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.446539 4762 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-run\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.446550 4762 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/453668f3-2ec0-4f51-a433-47426565e055-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.451096 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/453668f3-2ec0-4f51-a433-47426565e055-kube-api-access-d2n6v" (OuterVolumeSpecName: "kube-api-access-d2n6v") pod "453668f3-2ec0-4f51-a433-47426565e055" (UID: "453668f3-2ec0-4f51-a433-47426565e055"). InnerVolumeSpecName "kube-api-access-d2n6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.548107 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2n6v\" (UniqueName: \"kubernetes.io/projected/453668f3-2ec0-4f51-a433-47426565e055-kube-api-access-d2n6v\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.625888 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-8nwnr"] Mar 08 00:44:32 crc kubenswrapper[4762]: W0308 00:44:32.678371 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9bfc6a86_5a16_4814_9de9_f8cf060a966f.slice/crio-189540fc02eac74bb1de542820a1a49e0407e9fff3f4200d3d928afffe7d6f6b WatchSource:0}: Error finding container 189540fc02eac74bb1de542820a1a49e0407e9fff3f4200d3d928afffe7d6f6b: Status 404 returned error can't find the container with id 189540fc02eac74bb1de542820a1a49e0407e9fff3f4200d3d928afffe7d6f6b Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.812164 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mhxgz"] Mar 08 00:44:32 crc kubenswrapper[4762]: W0308 00:44:32.820111 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55ac7177_d14a_4b66_bab1_d8de8b6d8bdb.slice/crio-cfefc39adb72197f0e85745836f697fdddc43beaec950412e358d750e41ba5cb WatchSource:0}: Error finding container cfefc39adb72197f0e85745836f697fdddc43beaec950412e358d750e41ba5cb: Status 404 returned error can't find the container with id 
cfefc39adb72197f0e85745836f697fdddc43beaec950412e358d750e41ba5cb Mar 08 00:44:32 crc kubenswrapper[4762]: W0308 00:44:32.937902 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae10ab24_1407_4b27_97ab_3424e4b85a03.slice/crio-76792567ade8139760cf9827fc254b2555c456f0fa9ce4cac48823280d114e16 WatchSource:0}: Error finding container 76792567ade8139760cf9827fc254b2555c456f0fa9ce4cac48823280d114e16: Status 404 returned error can't find the container with id 76792567ade8139760cf9827fc254b2555c456f0fa9ce4cac48823280d114e16 Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.940413 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4682-account-create-update-fmc2n"] Mar 08 00:44:32 crc kubenswrapper[4762]: W0308 00:44:32.994225 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd11fd63c_9a18_4c14_a7fc_68bca559ce0f.slice/crio-56865f175da8d169aef0959a781b240de99d01a4c19b62c46d0e2aa935db1f3a WatchSource:0}: Error finding container 56865f175da8d169aef0959a781b240de99d01a4c19b62c46d0e2aa935db1f3a: Status 404 returned error can't find the container with id 56865f175da8d169aef0959a781b240de99d01a4c19b62c46d0e2aa935db1f3a Mar 08 00:44:32 crc kubenswrapper[4762]: I0308 00:44:32.998448 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-76c6s"] Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.001362 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-kkckg-config-gctn8" event={"ID":"453668f3-2ec0-4f51-a433-47426565e055","Type":"ContainerDied","Data":"2a68f81edf8df247e29490911e9ce349c01b7e2615cfc3d65a661679756f892a"} Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.001394 4762 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="2a68f81edf8df247e29490911e9ce349c01b7e2615cfc3d65a661679756f892a" Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.001417 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-kkckg-config-gctn8" Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.007457 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4682-account-create-update-fmc2n" event={"ID":"ae10ab24-1407-4b27-97ab-3424e4b85a03","Type":"ContainerStarted","Data":"76792567ade8139760cf9827fc254b2555c456f0fa9ce4cac48823280d114e16"} Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.008640 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eb5158d2-f742-4eef-8c66-f2db685aeb9e","Type":"ContainerStarted","Data":"561ac33ed996b8d6fa8e1f8c625b22a27f6ed7570f533ae53b874eb0ef017b86"} Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.009355 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8nwnr" event={"ID":"9bfc6a86-5a16-4814-9de9-f8cf060a966f","Type":"ContainerStarted","Data":"189540fc02eac74bb1de542820a1a49e0407e9fff3f4200d3d928afffe7d6f6b"} Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.011076 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-pc8mm" event={"ID":"2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b","Type":"ContainerStarted","Data":"0084fc87485ef56b35c25c5e219bf69a2f76206119c26bdde22b98d5c688fb9d"} Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.011110 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-pc8mm" event={"ID":"2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b","Type":"ContainerStarted","Data":"bd83e414d50027b34dce6570c3703a1f76787698705d5a933c273df1a8582b50"} Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.013474 4762 generic.go:334] "Generic (PLEG): container finished" podID="da255bbb-75db-4a07-8547-2bf0794edd04" 
containerID="cd1aaa171555907cb6e1ce1f31f58f8f7ee8e73b81b246ca49a3a38eafb237de" exitCode=0 Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.013540 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f242d" event={"ID":"da255bbb-75db-4a07-8547-2bf0794edd04","Type":"ContainerDied","Data":"cd1aaa171555907cb6e1ce1f31f58f8f7ee8e73b81b246ca49a3a38eafb237de"} Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.013560 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f242d" event={"ID":"da255bbb-75db-4a07-8547-2bf0794edd04","Type":"ContainerStarted","Data":"f8418fbe2f1365e3ad5b8cc6e1a3ec1d1f7e036794c2460100b750466f08069c"} Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.014598 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8103d22d-043e-4af1-a19d-307905e2a05f","Type":"ContainerStarted","Data":"79837a458ac19a96fcabcdb66ac986f261e849fa5ad607cd4c9586a623c830d4"} Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.016328 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c616-account-create-update-tvdx2" event={"ID":"e1da1613-f9de-4860-a63a-1ecd85e8f340","Type":"ContainerStarted","Data":"2373b2ef49cbd140ca57248d520cca16a0957a5e4c15ca55d5ae8afc76069539"} Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.016367 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c616-account-create-update-tvdx2" event={"ID":"e1da1613-f9de-4860-a63a-1ecd85e8f340","Type":"ContainerStarted","Data":"0573b115c237b2c3929d7de92e4f6f2a500bca4799b48d78730d070d7cef676f"} Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.017606 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-82a0-account-create-update-c5g4h" event={"ID":"2844b501-49b8-4b08-adbe-30159ca77f47","Type":"ContainerStarted","Data":"ea0db89a0f0bc007473ef74c878a40bf8348bdcc434b2761a93d24ea0be015f6"} Mar 08 
00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.017646 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-82a0-account-create-update-c5g4h" event={"ID":"2844b501-49b8-4b08-adbe-30159ca77f47","Type":"ContainerStarted","Data":"f093b935f6ab85c7b2f034c225e1f3959d7d66ee2bba4ea2e358764d74a1bfe4"} Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.019460 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mhxgz" event={"ID":"55ac7177-d14a-4b66-bab1-d8de8b6d8bdb","Type":"ContainerStarted","Data":"cfefc39adb72197f0e85745836f697fdddc43beaec950412e358d750e41ba5cb"} Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.051327 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-mhxgz" podStartSLOduration=3.051308756 podStartE2EDuration="3.051308756s" podCreationTimestamp="2026-03-08 00:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:44:33.050586003 +0000 UTC m=+1294.524730367" watchObservedRunningTime="2026-03-08 00:44:33.051308756 +0000 UTC m=+1294.525453100" Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.054177 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-pc8mm" podStartSLOduration=3.054161122 podStartE2EDuration="3.054161122s" podCreationTimestamp="2026-03-08 00:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:44:33.034594187 +0000 UTC m=+1294.508738531" watchObservedRunningTime="2026-03-08 00:44:33.054161122 +0000 UTC m=+1294.528305466" Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.105602 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-c616-account-create-update-tvdx2" podStartSLOduration=3.105584716 
podStartE2EDuration="3.105584716s" podCreationTimestamp="2026-03-08 00:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:44:33.073624114 +0000 UTC m=+1294.547768458" watchObservedRunningTime="2026-03-08 00:44:33.105584716 +0000 UTC m=+1294.579729060" Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.105928 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fd64-account-create-update-dlhcc"] Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.118894 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-82a0-account-create-update-c5g4h" podStartSLOduration=3.118873391 podStartE2EDuration="3.118873391s" podCreationTimestamp="2026-03-08 00:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:44:33.089096344 +0000 UTC m=+1294.563240688" watchObservedRunningTime="2026-03-08 00:44:33.118873391 +0000 UTC m=+1294.593017735" Mar 08 00:44:33 crc kubenswrapper[4762]: W0308 00:44:33.150710 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2595315c_6bb3_4ac1_a860_004cf18c89af.slice/crio-be2b65e45a5101f02894529c77e108b1f9bfd884e254450d0e8fdb358a8bd67d WatchSource:0}: Error finding container be2b65e45a5101f02894529c77e108b1f9bfd884e254450d0e8fdb358a8bd67d: Status 404 returned error can't find the container with id be2b65e45a5101f02894529c77e108b1f9bfd884e254450d0e8fdb358a8bd67d Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.438142 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-kkckg-config-gctn8"] Mar 08 00:44:33 crc kubenswrapper[4762]: I0308 00:44:33.451204 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-kkckg-config-gctn8"] Mar 08 
00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.035605 4762 generic.go:334] "Generic (PLEG): container finished" podID="e1da1613-f9de-4860-a63a-1ecd85e8f340" containerID="2373b2ef49cbd140ca57248d520cca16a0957a5e4c15ca55d5ae8afc76069539" exitCode=0 Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.035691 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c616-account-create-update-tvdx2" event={"ID":"e1da1613-f9de-4860-a63a-1ecd85e8f340","Type":"ContainerDied","Data":"2373b2ef49cbd140ca57248d520cca16a0957a5e4c15ca55d5ae8afc76069539"} Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.047671 4762 generic.go:334] "Generic (PLEG): container finished" podID="2844b501-49b8-4b08-adbe-30159ca77f47" containerID="ea0db89a0f0bc007473ef74c878a40bf8348bdcc434b2761a93d24ea0be015f6" exitCode=0 Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.047737 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-82a0-account-create-update-c5g4h" event={"ID":"2844b501-49b8-4b08-adbe-30159ca77f47","Type":"ContainerDied","Data":"ea0db89a0f0bc007473ef74c878a40bf8348bdcc434b2761a93d24ea0be015f6"} Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.054308 4762 generic.go:334] "Generic (PLEG): container finished" podID="2595315c-6bb3-4ac1-a860-004cf18c89af" containerID="f7ab78916310999b3c6aef47781df670639bd1cab72100e354bd6f7c091bd184" exitCode=0 Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.054392 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fd64-account-create-update-dlhcc" event={"ID":"2595315c-6bb3-4ac1-a860-004cf18c89af","Type":"ContainerDied","Data":"f7ab78916310999b3c6aef47781df670639bd1cab72100e354bd6f7c091bd184"} Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.054439 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fd64-account-create-update-dlhcc" 
event={"ID":"2595315c-6bb3-4ac1-a860-004cf18c89af","Type":"ContainerStarted","Data":"be2b65e45a5101f02894529c77e108b1f9bfd884e254450d0e8fdb358a8bd67d"} Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.057293 4762 generic.go:334] "Generic (PLEG): container finished" podID="2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b" containerID="0084fc87485ef56b35c25c5e219bf69a2f76206119c26bdde22b98d5c688fb9d" exitCode=0 Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.057339 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-pc8mm" event={"ID":"2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b","Type":"ContainerDied","Data":"0084fc87485ef56b35c25c5e219bf69a2f76206119c26bdde22b98d5c688fb9d"} Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.059541 4762 generic.go:334] "Generic (PLEG): container finished" podID="55ac7177-d14a-4b66-bab1-d8de8b6d8bdb" containerID="496ecf81e15894d396c24c98cb6dfb74c150d9b266f9b63618cddcd03fb9aecb" exitCode=0 Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.059725 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mhxgz" event={"ID":"55ac7177-d14a-4b66-bab1-d8de8b6d8bdb","Type":"ContainerDied","Data":"496ecf81e15894d396c24c98cb6dfb74c150d9b266f9b63618cddcd03fb9aecb"} Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.061642 4762 generic.go:334] "Generic (PLEG): container finished" podID="ae10ab24-1407-4b27-97ab-3424e4b85a03" containerID="933f6a5aee8c739dc9e13c926930605089d71d02b783a6cf738a30148c7cc5c4" exitCode=0 Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.061809 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4682-account-create-update-fmc2n" event={"ID":"ae10ab24-1407-4b27-97ab-3424e4b85a03","Type":"ContainerDied","Data":"933f6a5aee8c739dc9e13c926930605089d71d02b783a6cf738a30148c7cc5c4"} Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.065533 4762 generic.go:334] "Generic (PLEG): container finished" 
podID="d11fd63c-9a18-4c14-a7fc-68bca559ce0f" containerID="209e3cbf24f25a04b2128c2430344d8c2f085cc3a3b3f5b99eeb6cf7efa8e7c0" exitCode=0 Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.065612 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-76c6s" event={"ID":"d11fd63c-9a18-4c14-a7fc-68bca559ce0f","Type":"ContainerDied","Data":"209e3cbf24f25a04b2128c2430344d8c2f085cc3a3b3f5b99eeb6cf7efa8e7c0"} Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.065643 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-76c6s" event={"ID":"d11fd63c-9a18-4c14-a7fc-68bca559ce0f","Type":"ContainerStarted","Data":"56865f175da8d169aef0959a781b240de99d01a4c19b62c46d0e2aa935db1f3a"} Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.068091 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eb5158d2-f742-4eef-8c66-f2db685aeb9e","Type":"ContainerStarted","Data":"d7957213f22a66ebe2b83994ac1ad4d71bc4bbb0afdd261ea4ab653391595bec"} Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.068125 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eb5158d2-f742-4eef-8c66-f2db685aeb9e","Type":"ContainerStarted","Data":"a7ed7f7ec9f27c612653b0c1fc5e3982bc66bc60b247761d136ddeba060708c3"} Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.068153 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eb5158d2-f742-4eef-8c66-f2db685aeb9e","Type":"ContainerStarted","Data":"be0ec3a3d132883344e55f4c4c9f03c11bcd983536da5394f735ee62ffc1e5b4"} Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.688095 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zwrf4"] Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.702134 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zwrf4"] Mar 
08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.717466 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-f242d" Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.799830 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glj99\" (UniqueName: \"kubernetes.io/projected/da255bbb-75db-4a07-8547-2bf0794edd04-kube-api-access-glj99\") pod \"da255bbb-75db-4a07-8547-2bf0794edd04\" (UID: \"da255bbb-75db-4a07-8547-2bf0794edd04\") " Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.799910 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da255bbb-75db-4a07-8547-2bf0794edd04-operator-scripts\") pod \"da255bbb-75db-4a07-8547-2bf0794edd04\" (UID: \"da255bbb-75db-4a07-8547-2bf0794edd04\") " Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.800644 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da255bbb-75db-4a07-8547-2bf0794edd04-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da255bbb-75db-4a07-8547-2bf0794edd04" (UID: "da255bbb-75db-4a07-8547-2bf0794edd04"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.807917 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da255bbb-75db-4a07-8547-2bf0794edd04-kube-api-access-glj99" (OuterVolumeSpecName: "kube-api-access-glj99") pod "da255bbb-75db-4a07-8547-2bf0794edd04" (UID: "da255bbb-75db-4a07-8547-2bf0794edd04"). InnerVolumeSpecName "kube-api-access-glj99". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.902345 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glj99\" (UniqueName: \"kubernetes.io/projected/da255bbb-75db-4a07-8547-2bf0794edd04-kube-api-access-glj99\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:34 crc kubenswrapper[4762]: I0308 00:44:34.902384 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da255bbb-75db-4a07-8547-2bf0794edd04-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:35 crc kubenswrapper[4762]: I0308 00:44:35.078612 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8103d22d-043e-4af1-a19d-307905e2a05f","Type":"ContainerStarted","Data":"26fc7fc3c75c658d178b2587c5ff0dade7f34b77a1904e8689d717a320775594"} Mar 08 00:44:35 crc kubenswrapper[4762]: I0308 00:44:35.080342 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-f242d" event={"ID":"da255bbb-75db-4a07-8547-2bf0794edd04","Type":"ContainerDied","Data":"f8418fbe2f1365e3ad5b8cc6e1a3ec1d1f7e036794c2460100b750466f08069c"} Mar 08 00:44:35 crc kubenswrapper[4762]: I0308 00:44:35.080399 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8418fbe2f1365e3ad5b8cc6e1a3ec1d1f7e036794c2460100b750466f08069c" Mar 08 00:44:35 crc kubenswrapper[4762]: I0308 00:44:35.080440 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-f242d" Mar 08 00:44:35 crc kubenswrapper[4762]: I0308 00:44:35.274136 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19c13643-eddc-4b3d-a256-09bb2d876192" path="/var/lib/kubelet/pods/19c13643-eddc-4b3d-a256-09bb2d876192/volumes" Mar 08 00:44:35 crc kubenswrapper[4762]: I0308 00:44:35.274678 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="453668f3-2ec0-4f51-a433-47426565e055" path="/var/lib/kubelet/pods/453668f3-2ec0-4f51-a433-47426565e055/volumes" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.087935 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-82a0-account-create-update-c5g4h" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.122551 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4682-account-create-update-fmc2n" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.135697 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-c616-account-create-update-tvdx2" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.150613 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-82a0-account-create-update-c5g4h" event={"ID":"2844b501-49b8-4b08-adbe-30159ca77f47","Type":"ContainerDied","Data":"f093b935f6ab85c7b2f034c225e1f3959d7d66ee2bba4ea2e358764d74a1bfe4"} Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.151266 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f093b935f6ab85c7b2f034c225e1f3959d7d66ee2bba4ea2e358764d74a1bfe4" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.151281 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-pc8mm" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.150646 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-82a0-account-create-update-c5g4h" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.155482 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fd64-account-create-update-dlhcc" event={"ID":"2595315c-6bb3-4ac1-a860-004cf18c89af","Type":"ContainerDied","Data":"be2b65e45a5101f02894529c77e108b1f9bfd884e254450d0e8fdb358a8bd67d"} Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.156338 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be2b65e45a5101f02894529c77e108b1f9bfd884e254450d0e8fdb358a8bd67d" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.157624 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fd64-account-create-update-dlhcc" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.158729 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-pc8mm" event={"ID":"2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b","Type":"ContainerDied","Data":"bd83e414d50027b34dce6570c3703a1f76787698705d5a933c273df1a8582b50"} Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.158786 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd83e414d50027b34dce6570c3703a1f76787698705d5a933c273df1a8582b50" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.158867 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-pc8mm" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.163037 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mhxgz" event={"ID":"55ac7177-d14a-4b66-bab1-d8de8b6d8bdb","Type":"ContainerDied","Data":"cfefc39adb72197f0e85745836f697fdddc43beaec950412e358d750e41ba5cb"} Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.163166 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfefc39adb72197f0e85745836f697fdddc43beaec950412e358d750e41ba5cb" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.165818 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae10ab24-1407-4b27-97ab-3424e4b85a03-operator-scripts\") pod \"ae10ab24-1407-4b27-97ab-3424e4b85a03\" (UID: \"ae10ab24-1407-4b27-97ab-3424e4b85a03\") " Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.166151 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpngr\" (UniqueName: \"kubernetes.io/projected/2844b501-49b8-4b08-adbe-30159ca77f47-kube-api-access-fpngr\") pod \"2844b501-49b8-4b08-adbe-30159ca77f47\" (UID: \"2844b501-49b8-4b08-adbe-30159ca77f47\") " Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.166365 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2844b501-49b8-4b08-adbe-30159ca77f47-operator-scripts\") pod \"2844b501-49b8-4b08-adbe-30159ca77f47\" (UID: \"2844b501-49b8-4b08-adbe-30159ca77f47\") " Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.166609 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf564\" (UniqueName: \"kubernetes.io/projected/ae10ab24-1407-4b27-97ab-3424e4b85a03-kube-api-access-wf564\") pod \"ae10ab24-1407-4b27-97ab-3424e4b85a03\" (UID: 
\"ae10ab24-1407-4b27-97ab-3424e4b85a03\") " Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.167030 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4682-account-create-update-fmc2n" event={"ID":"ae10ab24-1407-4b27-97ab-3424e4b85a03","Type":"ContainerDied","Data":"76792567ade8139760cf9827fc254b2555c456f0fa9ce4cac48823280d114e16"} Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.167074 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76792567ade8139760cf9827fc254b2555c456f0fa9ce4cac48823280d114e16" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.167138 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4682-account-create-update-fmc2n" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.168391 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2844b501-49b8-4b08-adbe-30159ca77f47-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2844b501-49b8-4b08-adbe-30159ca77f47" (UID: "2844b501-49b8-4b08-adbe-30159ca77f47"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.168541 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae10ab24-1407-4b27-97ab-3424e4b85a03-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae10ab24-1407-4b27-97ab-3424e4b85a03" (UID: "ae10ab24-1407-4b27-97ab-3424e4b85a03"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.170902 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-76c6s" event={"ID":"d11fd63c-9a18-4c14-a7fc-68bca559ce0f","Type":"ContainerDied","Data":"56865f175da8d169aef0959a781b240de99d01a4c19b62c46d0e2aa935db1f3a"} Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.170934 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56865f175da8d169aef0959a781b240de99d01a4c19b62c46d0e2aa935db1f3a" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.171586 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-76c6s" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.173123 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-c616-account-create-update-tvdx2" event={"ID":"e1da1613-f9de-4860-a63a-1ecd85e8f340","Type":"ContainerDied","Data":"0573b115c237b2c3929d7de92e4f6f2a500bca4799b48d78730d070d7cef676f"} Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.173149 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0573b115c237b2c3929d7de92e4f6f2a500bca4799b48d78730d070d7cef676f" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.173180 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-c616-account-create-update-tvdx2" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.175807 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae10ab24-1407-4b27-97ab-3424e4b85a03-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.175930 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2844b501-49b8-4b08-adbe-30159ca77f47-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.176887 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae10ab24-1407-4b27-97ab-3424e4b85a03-kube-api-access-wf564" (OuterVolumeSpecName: "kube-api-access-wf564") pod "ae10ab24-1407-4b27-97ab-3424e4b85a03" (UID: "ae10ab24-1407-4b27-97ab-3424e4b85a03"). InnerVolumeSpecName "kube-api-access-wf564". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.181036 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2844b501-49b8-4b08-adbe-30159ca77f47-kube-api-access-fpngr" (OuterVolumeSpecName: "kube-api-access-fpngr") pod "2844b501-49b8-4b08-adbe-30159ca77f47" (UID: "2844b501-49b8-4b08-adbe-30159ca77f47"). InnerVolumeSpecName "kube-api-access-fpngr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.247732 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mhxgz" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.312061 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2595315c-6bb3-4ac1-a860-004cf18c89af-operator-scripts\") pod \"2595315c-6bb3-4ac1-a860-004cf18c89af\" (UID: \"2595315c-6bb3-4ac1-a860-004cf18c89af\") " Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.312151 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1da1613-f9de-4860-a63a-1ecd85e8f340-operator-scripts\") pod \"e1da1613-f9de-4860-a63a-1ecd85e8f340\" (UID: \"e1da1613-f9de-4860-a63a-1ecd85e8f340\") " Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.312222 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11fd63c-9a18-4c14-a7fc-68bca559ce0f-operator-scripts\") pod \"d11fd63c-9a18-4c14-a7fc-68bca559ce0f\" (UID: \"d11fd63c-9a18-4c14-a7fc-68bca559ce0f\") " Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.312268 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9jgzs\" (UniqueName: \"kubernetes.io/projected/2595315c-6bb3-4ac1-a860-004cf18c89af-kube-api-access-9jgzs\") pod \"2595315c-6bb3-4ac1-a860-004cf18c89af\" (UID: \"2595315c-6bb3-4ac1-a860-004cf18c89af\") " Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.312299 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m6cx\" (UniqueName: \"kubernetes.io/projected/2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b-kube-api-access-7m6cx\") pod \"2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b\" (UID: \"2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b\") " Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.312676 4762 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/2595315c-6bb3-4ac1-a860-004cf18c89af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2595315c-6bb3-4ac1-a860-004cf18c89af" (UID: "2595315c-6bb3-4ac1-a860-004cf18c89af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.312706 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d11fd63c-9a18-4c14-a7fc-68bca559ce0f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d11fd63c-9a18-4c14-a7fc-68bca559ce0f" (UID: "d11fd63c-9a18-4c14-a7fc-68bca559ce0f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.312682 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1da1613-f9de-4860-a63a-1ecd85e8f340-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1da1613-f9de-4860-a63a-1ecd85e8f340" (UID: "e1da1613-f9de-4860-a63a-1ecd85e8f340"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.312860 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sj44\" (UniqueName: \"kubernetes.io/projected/e1da1613-f9de-4860-a63a-1ecd85e8f340-kube-api-access-6sj44\") pod \"e1da1613-f9de-4860-a63a-1ecd85e8f340\" (UID: \"e1da1613-f9de-4860-a63a-1ecd85e8f340\") " Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.312937 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s56x5\" (UniqueName: \"kubernetes.io/projected/d11fd63c-9a18-4c14-a7fc-68bca559ce0f-kube-api-access-s56x5\") pod \"d11fd63c-9a18-4c14-a7fc-68bca559ce0f\" (UID: \"d11fd63c-9a18-4c14-a7fc-68bca559ce0f\") " Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.313310 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b-operator-scripts\") pod \"2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b\" (UID: \"2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b\") " Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.314061 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpngr\" (UniqueName: \"kubernetes.io/projected/2844b501-49b8-4b08-adbe-30159ca77f47-kube-api-access-fpngr\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.314091 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2595315c-6bb3-4ac1-a860-004cf18c89af-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.314102 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1da1613-f9de-4860-a63a-1ecd85e8f340-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:38 crc 
kubenswrapper[4762]: I0308 00:44:38.314114 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf564\" (UniqueName: \"kubernetes.io/projected/ae10ab24-1407-4b27-97ab-3424e4b85a03-kube-api-access-wf564\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.314124 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d11fd63c-9a18-4c14-a7fc-68bca559ce0f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.314908 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b" (UID: "2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.317336 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2595315c-6bb3-4ac1-a860-004cf18c89af-kube-api-access-9jgzs" (OuterVolumeSpecName: "kube-api-access-9jgzs") pod "2595315c-6bb3-4ac1-a860-004cf18c89af" (UID: "2595315c-6bb3-4ac1-a860-004cf18c89af"). InnerVolumeSpecName "kube-api-access-9jgzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.317457 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1da1613-f9de-4860-a63a-1ecd85e8f340-kube-api-access-6sj44" (OuterVolumeSpecName: "kube-api-access-6sj44") pod "e1da1613-f9de-4860-a63a-1ecd85e8f340" (UID: "e1da1613-f9de-4860-a63a-1ecd85e8f340"). InnerVolumeSpecName "kube-api-access-6sj44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.317860 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b-kube-api-access-7m6cx" (OuterVolumeSpecName: "kube-api-access-7m6cx") pod "2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b" (UID: "2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b"). InnerVolumeSpecName "kube-api-access-7m6cx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.320086 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11fd63c-9a18-4c14-a7fc-68bca559ce0f-kube-api-access-s56x5" (OuterVolumeSpecName: "kube-api-access-s56x5") pod "d11fd63c-9a18-4c14-a7fc-68bca559ce0f" (UID: "d11fd63c-9a18-4c14-a7fc-68bca559ce0f"). InnerVolumeSpecName "kube-api-access-s56x5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.415649 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55ac7177-d14a-4b66-bab1-d8de8b6d8bdb-operator-scripts\") pod \"55ac7177-d14a-4b66-bab1-d8de8b6d8bdb\" (UID: \"55ac7177-d14a-4b66-bab1-d8de8b6d8bdb\") " Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.415715 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjgr9\" (UniqueName: \"kubernetes.io/projected/55ac7177-d14a-4b66-bab1-d8de8b6d8bdb-kube-api-access-jjgr9\") pod \"55ac7177-d14a-4b66-bab1-d8de8b6d8bdb\" (UID: \"55ac7177-d14a-4b66-bab1-d8de8b6d8bdb\") " Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.416314 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sj44\" (UniqueName: \"kubernetes.io/projected/e1da1613-f9de-4860-a63a-1ecd85e8f340-kube-api-access-6sj44\") on node \"crc\" DevicePath \"\"" Mar 08 
00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.416329 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s56x5\" (UniqueName: \"kubernetes.io/projected/d11fd63c-9a18-4c14-a7fc-68bca559ce0f-kube-api-access-s56x5\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.416338 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.416346 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9jgzs\" (UniqueName: \"kubernetes.io/projected/2595315c-6bb3-4ac1-a860-004cf18c89af-kube-api-access-9jgzs\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.416354 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m6cx\" (UniqueName: \"kubernetes.io/projected/2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b-kube-api-access-7m6cx\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.417019 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55ac7177-d14a-4b66-bab1-d8de8b6d8bdb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55ac7177-d14a-4b66-bab1-d8de8b6d8bdb" (UID: "55ac7177-d14a-4b66-bab1-d8de8b6d8bdb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.419137 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55ac7177-d14a-4b66-bab1-d8de8b6d8bdb-kube-api-access-jjgr9" (OuterVolumeSpecName: "kube-api-access-jjgr9") pod "55ac7177-d14a-4b66-bab1-d8de8b6d8bdb" (UID: "55ac7177-d14a-4b66-bab1-d8de8b6d8bdb"). InnerVolumeSpecName "kube-api-access-jjgr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.517644 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55ac7177-d14a-4b66-bab1-d8de8b6d8bdb-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:38 crc kubenswrapper[4762]: I0308 00:44:38.517687 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjgr9\" (UniqueName: \"kubernetes.io/projected/55ac7177-d14a-4b66-bab1-d8de8b6d8bdb-kube-api-access-jjgr9\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.207439 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8nwnr" event={"ID":"9bfc6a86-5a16-4814-9de9-f8cf060a966f","Type":"ContainerStarted","Data":"99f1d044c6a46129a53cc10de3a238248dd42ba5b8b5e6f383e41e657e6459ee"} Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.218837 4762 generic.go:334] "Generic (PLEG): container finished" podID="ff70ef09-d101-4c3f-8a03-95b5fbe0b250" containerID="17036cd87a89710f741f57d9c87eb414a74f9dcd46354fcc280150141a4acc8a" exitCode=0 Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.218929 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5mfsp" event={"ID":"ff70ef09-d101-4c3f-8a03-95b5fbe0b250","Type":"ContainerDied","Data":"17036cd87a89710f741f57d9c87eb414a74f9dcd46354fcc280150141a4acc8a"} Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.232780 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fd64-account-create-update-dlhcc" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.232826 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mhxgz" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.232863 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eb5158d2-f742-4eef-8c66-f2db685aeb9e","Type":"ContainerStarted","Data":"19e5266f17063c0412857dffa9c6b282bcd70e13c1ca3c487f63dfa94d83ab6e"} Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.232888 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eb5158d2-f742-4eef-8c66-f2db685aeb9e","Type":"ContainerStarted","Data":"f4f2c102577e248eec958201ae35d9e38c18085b23991e630b236ed6029ace6e"} Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.232903 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eb5158d2-f742-4eef-8c66-f2db685aeb9e","Type":"ContainerStarted","Data":"03c7dbf2d4228dfa3aeaa0e8d6f8fe55cdb56e4494f8510a436602c1751bb6d3"} Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.232913 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eb5158d2-f742-4eef-8c66-f2db685aeb9e","Type":"ContainerStarted","Data":"dd45b13c14fdf842ec60099aca2c2c9c3789db959e7c4abd939c1596a70437fe"} Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.233023 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-76c6s" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.253816 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-8nwnr" podStartSLOduration=3.991809572 podStartE2EDuration="9.253739551s" podCreationTimestamp="2026-03-08 00:44:30 +0000 UTC" firstStartedPulling="2026-03-08 00:44:32.682203269 +0000 UTC m=+1294.156347603" lastFinishedPulling="2026-03-08 00:44:37.944133238 +0000 UTC m=+1299.418277582" observedRunningTime="2026-03-08 00:44:39.232228617 +0000 UTC m=+1300.706372951" watchObservedRunningTime="2026-03-08 00:44:39.253739551 +0000 UTC m=+1300.727883895" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.708414 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4jv7v"] Mar 08 00:44:39 crc kubenswrapper[4762]: E0308 00:44:39.709088 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="453668f3-2ec0-4f51-a433-47426565e055" containerName="ovn-config" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709107 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="453668f3-2ec0-4f51-a433-47426565e055" containerName="ovn-config" Mar 08 00:44:39 crc kubenswrapper[4762]: E0308 00:44:39.709130 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d11fd63c-9a18-4c14-a7fc-68bca559ce0f" containerName="mariadb-database-create" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709136 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d11fd63c-9a18-4c14-a7fc-68bca559ce0f" containerName="mariadb-database-create" Mar 08 00:44:39 crc kubenswrapper[4762]: E0308 00:44:39.709143 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da255bbb-75db-4a07-8547-2bf0794edd04" containerName="mariadb-database-create" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709150 4762 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="da255bbb-75db-4a07-8547-2bf0794edd04" containerName="mariadb-database-create" Mar 08 00:44:39 crc kubenswrapper[4762]: E0308 00:44:39.709163 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b" containerName="mariadb-database-create" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709169 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b" containerName="mariadb-database-create" Mar 08 00:44:39 crc kubenswrapper[4762]: E0308 00:44:39.709179 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1da1613-f9de-4860-a63a-1ecd85e8f340" containerName="mariadb-account-create-update" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709185 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1da1613-f9de-4860-a63a-1ecd85e8f340" containerName="mariadb-account-create-update" Mar 08 00:44:39 crc kubenswrapper[4762]: E0308 00:44:39.709194 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae10ab24-1407-4b27-97ab-3424e4b85a03" containerName="mariadb-account-create-update" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709200 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae10ab24-1407-4b27-97ab-3424e4b85a03" containerName="mariadb-account-create-update" Mar 08 00:44:39 crc kubenswrapper[4762]: E0308 00:44:39.709212 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2844b501-49b8-4b08-adbe-30159ca77f47" containerName="mariadb-account-create-update" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709218 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2844b501-49b8-4b08-adbe-30159ca77f47" containerName="mariadb-account-create-update" Mar 08 00:44:39 crc kubenswrapper[4762]: E0308 00:44:39.709228 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2595315c-6bb3-4ac1-a860-004cf18c89af" containerName="mariadb-account-create-update" Mar 08 
00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709234 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2595315c-6bb3-4ac1-a860-004cf18c89af" containerName="mariadb-account-create-update" Mar 08 00:44:39 crc kubenswrapper[4762]: E0308 00:44:39.709248 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55ac7177-d14a-4b66-bab1-d8de8b6d8bdb" containerName="mariadb-database-create" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709253 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ac7177-d14a-4b66-bab1-d8de8b6d8bdb" containerName="mariadb-database-create" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709428 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="da255bbb-75db-4a07-8547-2bf0794edd04" containerName="mariadb-database-create" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709438 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="2595315c-6bb3-4ac1-a860-004cf18c89af" containerName="mariadb-account-create-update" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709447 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae10ab24-1407-4b27-97ab-3424e4b85a03" containerName="mariadb-account-create-update" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709458 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1da1613-f9de-4860-a63a-1ecd85e8f340" containerName="mariadb-account-create-update" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709470 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="453668f3-2ec0-4f51-a433-47426565e055" containerName="ovn-config" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709479 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b" containerName="mariadb-database-create" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709488 4762 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="55ac7177-d14a-4b66-bab1-d8de8b6d8bdb" containerName="mariadb-database-create" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709496 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="2844b501-49b8-4b08-adbe-30159ca77f47" containerName="mariadb-account-create-update" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.709505 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="d11fd63c-9a18-4c14-a7fc-68bca559ce0f" containerName="mariadb-database-create" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.710182 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4jv7v" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.716096 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.718423 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4jv7v"] Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.845422 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69d92be0-6a23-4bc5-93b5-342f087356be-operator-scripts\") pod \"root-account-create-update-4jv7v\" (UID: \"69d92be0-6a23-4bc5-93b5-342f087356be\") " pod="openstack/root-account-create-update-4jv7v" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.845498 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtcwb\" (UniqueName: \"kubernetes.io/projected/69d92be0-6a23-4bc5-93b5-342f087356be-kube-api-access-dtcwb\") pod \"root-account-create-update-4jv7v\" (UID: \"69d92be0-6a23-4bc5-93b5-342f087356be\") " pod="openstack/root-account-create-update-4jv7v" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.947218 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69d92be0-6a23-4bc5-93b5-342f087356be-operator-scripts\") pod \"root-account-create-update-4jv7v\" (UID: \"69d92be0-6a23-4bc5-93b5-342f087356be\") " pod="openstack/root-account-create-update-4jv7v" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.947289 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtcwb\" (UniqueName: \"kubernetes.io/projected/69d92be0-6a23-4bc5-93b5-342f087356be-kube-api-access-dtcwb\") pod \"root-account-create-update-4jv7v\" (UID: \"69d92be0-6a23-4bc5-93b5-342f087356be\") " pod="openstack/root-account-create-update-4jv7v" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.949282 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69d92be0-6a23-4bc5-93b5-342f087356be-operator-scripts\") pod \"root-account-create-update-4jv7v\" (UID: \"69d92be0-6a23-4bc5-93b5-342f087356be\") " pod="openstack/root-account-create-update-4jv7v" Mar 08 00:44:39 crc kubenswrapper[4762]: I0308 00:44:39.968981 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtcwb\" (UniqueName: \"kubernetes.io/projected/69d92be0-6a23-4bc5-93b5-342f087356be-kube-api-access-dtcwb\") pod \"root-account-create-update-4jv7v\" (UID: \"69d92be0-6a23-4bc5-93b5-342f087356be\") " pod="openstack/root-account-create-update-4jv7v" Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.038538 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4jv7v" Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.262167 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eb5158d2-f742-4eef-8c66-f2db685aeb9e","Type":"ContainerStarted","Data":"3b896d6075c71ff6553fd9dcdb32cb8020bcb97da13ef8866a907fb92361cabd"} Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.262803 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eb5158d2-f742-4eef-8c66-f2db685aeb9e","Type":"ContainerStarted","Data":"e848f5d021fce466f3a97bde778dbb326c3bb36190bcf1577ff3ef35a00ad040"} Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.612855 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4jv7v"] Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.702202 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5mfsp" Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.889577 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-config-data\") pod \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\" (UID: \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\") " Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.889628 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-db-sync-config-data\") pod \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\" (UID: \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\") " Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.889658 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbtqd\" (UniqueName: 
\"kubernetes.io/projected/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-kube-api-access-bbtqd\") pod \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\" (UID: \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\") " Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.889730 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-combined-ca-bundle\") pod \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\" (UID: \"ff70ef09-d101-4c3f-8a03-95b5fbe0b250\") " Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.893465 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ff70ef09-d101-4c3f-8a03-95b5fbe0b250" (UID: "ff70ef09-d101-4c3f-8a03-95b5fbe0b250"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.898952 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-kube-api-access-bbtqd" (OuterVolumeSpecName: "kube-api-access-bbtqd") pod "ff70ef09-d101-4c3f-8a03-95b5fbe0b250" (UID: "ff70ef09-d101-4c3f-8a03-95b5fbe0b250"). InnerVolumeSpecName "kube-api-access-bbtqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.919891 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff70ef09-d101-4c3f-8a03-95b5fbe0b250" (UID: "ff70ef09-d101-4c3f-8a03-95b5fbe0b250"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.961891 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-config-data" (OuterVolumeSpecName: "config-data") pod "ff70ef09-d101-4c3f-8a03-95b5fbe0b250" (UID: "ff70ef09-d101-4c3f-8a03-95b5fbe0b250"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.993229 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbtqd\" (UniqueName: \"kubernetes.io/projected/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-kube-api-access-bbtqd\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.993716 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.993836 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:40 crc kubenswrapper[4762]: I0308 00:44:40.993914 4762 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ff70ef09-d101-4c3f-8a03-95b5fbe0b250-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.277203 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eb5158d2-f742-4eef-8c66-f2db685aeb9e","Type":"ContainerStarted","Data":"3b9d98ea570edc953c24b2f066a071cc8deb076b985765faec8308618369383d"} Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.277253 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"eb5158d2-f742-4eef-8c66-f2db685aeb9e","Type":"ContainerStarted","Data":"09130b7543620f75db8f3f2dbed81275cef7e6a913bc98bd715bd0f1d30ea130"} Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.277268 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eb5158d2-f742-4eef-8c66-f2db685aeb9e","Type":"ContainerStarted","Data":"0fb70102eb3629c2fb58bec2e3af3a4c39539f1a9e89b340b4644fe1ec6f09af"} Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.280387 4762 generic.go:334] "Generic (PLEG): container finished" podID="9bfc6a86-5a16-4814-9de9-f8cf060a966f" containerID="99f1d044c6a46129a53cc10de3a238248dd42ba5b8b5e6f383e41e657e6459ee" exitCode=0 Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.280459 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8nwnr" event={"ID":"9bfc6a86-5a16-4814-9de9-f8cf060a966f","Type":"ContainerDied","Data":"99f1d044c6a46129a53cc10de3a238248dd42ba5b8b5e6f383e41e657e6459ee"} Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.284131 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4jv7v" event={"ID":"69d92be0-6a23-4bc5-93b5-342f087356be","Type":"ContainerStarted","Data":"ba69203ae73ee6dc75bd92fac80922f446ecb8c2b18a378f6e08a85950a61b04"} Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.284176 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4jv7v" event={"ID":"69d92be0-6a23-4bc5-93b5-342f087356be","Type":"ContainerStarted","Data":"9d4b57064f7b7b67cbd23fa34ee65f7e6f0f5502dec6cecd31a839c718ad3b47"} Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.286585 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5mfsp" event={"ID":"ff70ef09-d101-4c3f-8a03-95b5fbe0b250","Type":"ContainerDied","Data":"153ee8047bac8c4cb75319c652e9962e91b255f33746cea05442afa92e258dc3"} Mar 08 
00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.286668 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="153ee8047bac8c4cb75319c652e9962e91b255f33746cea05442afa92e258dc3" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.286773 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5mfsp" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.336855 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-4jv7v" podStartSLOduration=2.336838652 podStartE2EDuration="2.336838652s" podCreationTimestamp="2026-03-08 00:44:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:44:41.308071908 +0000 UTC m=+1302.782216252" watchObservedRunningTime="2026-03-08 00:44:41.336838652 +0000 UTC m=+1302.810982986" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.562083 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-twntd"] Mar 08 00:44:41 crc kubenswrapper[4762]: E0308 00:44:41.562662 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff70ef09-d101-4c3f-8a03-95b5fbe0b250" containerName="glance-db-sync" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.562677 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff70ef09-d101-4c3f-8a03-95b5fbe0b250" containerName="glance-db-sync" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.569394 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff70ef09-d101-4c3f-8a03-95b5fbe0b250" containerName="glance-db-sync" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.570343 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.584982 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-twntd"] Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.706131 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-config\") pod \"dnsmasq-dns-74dc88fc-twntd\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.706206 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-twntd\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.706334 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-dns-svc\") pod \"dnsmasq-dns-74dc88fc-twntd\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.706373 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-twntd\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.706438 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-cpnrn\" (UniqueName: \"kubernetes.io/projected/d12ba102-382c-4b30-a18e-8b94e7879453-kube-api-access-cpnrn\") pod \"dnsmasq-dns-74dc88fc-twntd\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.808206 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpnrn\" (UniqueName: \"kubernetes.io/projected/d12ba102-382c-4b30-a18e-8b94e7879453-kube-api-access-cpnrn\") pod \"dnsmasq-dns-74dc88fc-twntd\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.808269 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-config\") pod \"dnsmasq-dns-74dc88fc-twntd\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.808306 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-twntd\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.808406 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-dns-svc\") pod \"dnsmasq-dns-74dc88fc-twntd\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.808454 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-twntd\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.809349 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-ovsdbserver-sb\") pod \"dnsmasq-dns-74dc88fc-twntd\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.809373 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-ovsdbserver-nb\") pod \"dnsmasq-dns-74dc88fc-twntd\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.809381 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-config\") pod \"dnsmasq-dns-74dc88fc-twntd\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.809546 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-dns-svc\") pod \"dnsmasq-dns-74dc88fc-twntd\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.827647 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpnrn\" (UniqueName: \"kubernetes.io/projected/d12ba102-382c-4b30-a18e-8b94e7879453-kube-api-access-cpnrn\") pod \"dnsmasq-dns-74dc88fc-twntd\" (UID: 
\"d12ba102-382c-4b30-a18e-8b94e7879453\") " pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:41 crc kubenswrapper[4762]: I0308 00:44:41.891926 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.302837 4762 generic.go:334] "Generic (PLEG): container finished" podID="8103d22d-043e-4af1-a19d-307905e2a05f" containerID="26fc7fc3c75c658d178b2587c5ff0dade7f34b77a1904e8689d717a320775594" exitCode=0 Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.302970 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8103d22d-043e-4af1-a19d-307905e2a05f","Type":"ContainerDied","Data":"26fc7fc3c75c658d178b2587c5ff0dade7f34b77a1904e8689d717a320775594"} Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.351242 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eb5158d2-f742-4eef-8c66-f2db685aeb9e","Type":"ContainerStarted","Data":"26468edf0af6639773dc9c0c8a8d2a68b2a722711b3278e362ff99a0a7e8d095"} Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.351289 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"eb5158d2-f742-4eef-8c66-f2db685aeb9e","Type":"ContainerStarted","Data":"b6ee3cc7741a7e38f25755ff32b7fe08fb9f2e793fe75d3c3a33f77882d3f645"} Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.356015 4762 generic.go:334] "Generic (PLEG): container finished" podID="69d92be0-6a23-4bc5-93b5-342f087356be" containerID="ba69203ae73ee6dc75bd92fac80922f446ecb8c2b18a378f6e08a85950a61b04" exitCode=0 Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.356309 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4jv7v" 
event={"ID":"69d92be0-6a23-4bc5-93b5-342f087356be","Type":"ContainerDied","Data":"ba69203ae73ee6dc75bd92fac80922f446ecb8c2b18a378f6e08a85950a61b04"} Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.404458 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.956459334 podStartE2EDuration="49.404434905s" podCreationTimestamp="2026-03-08 00:43:53 +0000 UTC" firstStartedPulling="2026-03-08 00:44:29.279890192 +0000 UTC m=+1290.754034536" lastFinishedPulling="2026-03-08 00:44:39.727865753 +0000 UTC m=+1301.202010107" observedRunningTime="2026-03-08 00:44:42.389335945 +0000 UTC m=+1303.863480289" watchObservedRunningTime="2026-03-08 00:44:42.404434905 +0000 UTC m=+1303.878579269" Mar 08 00:44:42 crc kubenswrapper[4762]: W0308 00:44:42.421492 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd12ba102_382c_4b30_a18e_8b94e7879453.slice/crio-83117d811dd860830ffd1e2365bd2a9199c49a35fed60ea37d4ae8def51f772f WatchSource:0}: Error finding container 83117d811dd860830ffd1e2365bd2a9199c49a35fed60ea37d4ae8def51f772f: Status 404 returned error can't find the container with id 83117d811dd860830ffd1e2365bd2a9199c49a35fed60ea37d4ae8def51f772f Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.422910 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-twntd"] Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.664131 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-twntd"] Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.691056 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rqhpw"] Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.692709 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.698975 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.716837 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rqhpw"] Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.722867 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-8nwnr" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.852003 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.852363 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.873652 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2p4p\" (UniqueName: \"kubernetes.io/projected/9bfc6a86-5a16-4814-9de9-f8cf060a966f-kube-api-access-z2p4p\") pod \"9bfc6a86-5a16-4814-9de9-f8cf060a966f\" (UID: \"9bfc6a86-5a16-4814-9de9-f8cf060a966f\") " Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.873732 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bfc6a86-5a16-4814-9de9-f8cf060a966f-combined-ca-bundle\") pod 
\"9bfc6a86-5a16-4814-9de9-f8cf060a966f\" (UID: \"9bfc6a86-5a16-4814-9de9-f8cf060a966f\") " Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.873935 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bfc6a86-5a16-4814-9de9-f8cf060a966f-config-data\") pod \"9bfc6a86-5a16-4814-9de9-f8cf060a966f\" (UID: \"9bfc6a86-5a16-4814-9de9-f8cf060a966f\") " Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.874189 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.874216 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.874237 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.874276 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-config\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: 
\"2c63252b-7c30-4197-b34a-9870713af320\") " pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.874338 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.874378 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfd8l\" (UniqueName: \"kubernetes.io/projected/2c63252b-7c30-4197-b34a-9870713af320-kube-api-access-tfd8l\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.877817 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bfc6a86-5a16-4814-9de9-f8cf060a966f-kube-api-access-z2p4p" (OuterVolumeSpecName: "kube-api-access-z2p4p") pod "9bfc6a86-5a16-4814-9de9-f8cf060a966f" (UID: "9bfc6a86-5a16-4814-9de9-f8cf060a966f"). InnerVolumeSpecName "kube-api-access-z2p4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.897685 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bfc6a86-5a16-4814-9de9-f8cf060a966f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9bfc6a86-5a16-4814-9de9-f8cf060a966f" (UID: "9bfc6a86-5a16-4814-9de9-f8cf060a966f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.917156 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bfc6a86-5a16-4814-9de9-f8cf060a966f-config-data" (OuterVolumeSpecName: "config-data") pod "9bfc6a86-5a16-4814-9de9-f8cf060a966f" (UID: "9bfc6a86-5a16-4814-9de9-f8cf060a966f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.976415 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.976487 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.976530 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.976610 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-config\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " 
pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.976736 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.976850 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfd8l\" (UniqueName: \"kubernetes.io/projected/2c63252b-7c30-4197-b34a-9870713af320-kube-api-access-tfd8l\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.976965 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9bfc6a86-5a16-4814-9de9-f8cf060a966f-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.976992 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2p4p\" (UniqueName: \"kubernetes.io/projected/9bfc6a86-5a16-4814-9de9-f8cf060a966f-kube-api-access-z2p4p\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.977012 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9bfc6a86-5a16-4814-9de9-f8cf060a966f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.977259 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " 
pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.977832 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.978388 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.978918 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.978916 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-config\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:42 crc kubenswrapper[4762]: I0308 00:44:42.997208 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfd8l\" (UniqueName: \"kubernetes.io/projected/2c63252b-7c30-4197-b34a-9870713af320-kube-api-access-tfd8l\") pod \"dnsmasq-dns-5f59b8f679-rqhpw\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 
00:44:43.020346 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.373722 4762 generic.go:334] "Generic (PLEG): container finished" podID="d12ba102-382c-4b30-a18e-8b94e7879453" containerID="5bc0148767a036e46719269d2969ea316d93c9670ce8be1ee742bb2df15ab7fd" exitCode=0 Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.374128 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-twntd" event={"ID":"d12ba102-382c-4b30-a18e-8b94e7879453","Type":"ContainerDied","Data":"5bc0148767a036e46719269d2969ea316d93c9670ce8be1ee742bb2df15ab7fd"} Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.374166 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-twntd" event={"ID":"d12ba102-382c-4b30-a18e-8b94e7879453","Type":"ContainerStarted","Data":"83117d811dd860830ffd1e2365bd2a9199c49a35fed60ea37d4ae8def51f772f"} Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.402105 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8103d22d-043e-4af1-a19d-307905e2a05f","Type":"ContainerStarted","Data":"b3882252dbe7b53aa0df27b4c2b5c810356550f9098bea52ec83f6f21e31fb15"} Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.412436 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-8nwnr" Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.412994 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-8nwnr" event={"ID":"9bfc6a86-5a16-4814-9de9-f8cf060a966f","Type":"ContainerDied","Data":"189540fc02eac74bb1de542820a1a49e0407e9fff3f4200d3d928afffe7d6f6b"} Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.413032 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="189540fc02eac74bb1de542820a1a49e0407e9fff3f4200d3d928afffe7d6f6b" Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.479299 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rqhpw"] Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.532550 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-chxth"] Mar 08 00:44:43 crc kubenswrapper[4762]: E0308 00:44:43.533216 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bfc6a86-5a16-4814-9de9-f8cf060a966f" containerName="keystone-db-sync" Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.533233 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bfc6a86-5a16-4814-9de9-f8cf060a966f" containerName="keystone-db-sync" Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.533441 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bfc6a86-5a16-4814-9de9-f8cf060a966f" containerName="keystone-db-sync" Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.534099 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.538591 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.538639 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rwwq6"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.539092 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.539312 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.539479 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.554340 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rqhpw"]
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.582926 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-chxth"]
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.614048 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-fdrt4"]
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.625938 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.670774 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-fdrt4"]
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.710127 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-config\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.710410 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-combined-ca-bundle\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.710442 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-scripts\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.710468 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.710501 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-credential-keys\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.710527 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6pvq\" (UniqueName: \"kubernetes.io/projected/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-kube-api-access-m6pvq\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.710564 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-config-data\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.710588 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.710619 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.710662 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-fernet-keys\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.710719 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhfj5\" (UniqueName: \"kubernetes.io/projected/28304045-4be1-47ec-99d2-f77171370750-kube-api-access-fhfj5\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.710744 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.762903 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-z4g9j"]
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.764368 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-z4g9j"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.767490 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-zt4lp"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.767799 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.814994 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.815239 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4992e7da-9de7-4354-a35f-a68f8bd0013a-config-data\") pod \"heat-db-sync-z4g9j\" (UID: \"4992e7da-9de7-4354-a35f-a68f8bd0013a\") " pod="openstack/heat-db-sync-z4g9j"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.815359 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-fernet-keys\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.815468 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhfj5\" (UniqueName: \"kubernetes.io/projected/28304045-4be1-47ec-99d2-f77171370750-kube-api-access-fhfj5\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.815545 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.815639 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-config\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.815720 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-combined-ca-bundle\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.815808 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4992e7da-9de7-4354-a35f-a68f8bd0013a-combined-ca-bundle\") pod \"heat-db-sync-z4g9j\" (UID: \"4992e7da-9de7-4354-a35f-a68f8bd0013a\") " pod="openstack/heat-db-sync-z4g9j"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.815916 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-scripts\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.816008 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.816102 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6m4g\" (UniqueName: \"kubernetes.io/projected/4992e7da-9de7-4354-a35f-a68f8bd0013a-kube-api-access-c6m4g\") pod \"heat-db-sync-z4g9j\" (UID: \"4992e7da-9de7-4354-a35f-a68f8bd0013a\") " pod="openstack/heat-db-sync-z4g9j"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.816180 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-credential-keys\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.816253 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6pvq\" (UniqueName: \"kubernetes.io/projected/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-kube-api-access-m6pvq\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.816342 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-config-data\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.816417 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.817234 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.818437 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.826070 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-fernet-keys\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.826714 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.829014 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.829360 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-scripts\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.829408 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-z4g9j"]
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.838506 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-combined-ca-bundle\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.839399 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-config\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.847194 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-config-data\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.858681 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhfj5\" (UniqueName: \"kubernetes.io/projected/28304045-4be1-47ec-99d2-f77171370750-kube-api-access-fhfj5\") pod \"dnsmasq-dns-bbf5cc879-fdrt4\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.858683 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6pvq\" (UniqueName: \"kubernetes.io/projected/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-kube-api-access-m6pvq\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.859035 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-credential-keys\") pod \"keystone-bootstrap-chxth\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.912822 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-p8wzm"]
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.914056 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-p8wzm"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.919174 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.919420 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pp5qm"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.919538 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.922884 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6m4g\" (UniqueName: \"kubernetes.io/projected/4992e7da-9de7-4354-a35f-a68f8bd0013a-kube-api-access-c6m4g\") pod \"heat-db-sync-z4g9j\" (UID: \"4992e7da-9de7-4354-a35f-a68f8bd0013a\") " pod="openstack/heat-db-sync-z4g9j"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.923118 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4992e7da-9de7-4354-a35f-a68f8bd0013a-config-data\") pod \"heat-db-sync-z4g9j\" (UID: \"4992e7da-9de7-4354-a35f-a68f8bd0013a\") " pod="openstack/heat-db-sync-z4g9j"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.923361 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4992e7da-9de7-4354-a35f-a68f8bd0013a-combined-ca-bundle\") pod \"heat-db-sync-z4g9j\" (UID: \"4992e7da-9de7-4354-a35f-a68f8bd0013a\") " pod="openstack/heat-db-sync-z4g9j"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.923985 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-chxth"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.928432 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4992e7da-9de7-4354-a35f-a68f8bd0013a-config-data\") pod \"heat-db-sync-z4g9j\" (UID: \"4992e7da-9de7-4354-a35f-a68f8bd0013a\") " pod="openstack/heat-db-sync-z4g9j"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.931291 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4992e7da-9de7-4354-a35f-a68f8bd0013a-combined-ca-bundle\") pod \"heat-db-sync-z4g9j\" (UID: \"4992e7da-9de7-4354-a35f-a68f8bd0013a\") " pod="openstack/heat-db-sync-z4g9j"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.942501 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-pxw9p"]
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.943681 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.944498 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.947792 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6m4g\" (UniqueName: \"kubernetes.io/projected/4992e7da-9de7-4354-a35f-a68f8bd0013a-kube-api-access-c6m4g\") pod \"heat-db-sync-z4g9j\" (UID: \"4992e7da-9de7-4354-a35f-a68f8bd0013a\") " pod="openstack/heat-db-sync-z4g9j"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.955240 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fnlb6"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.955430 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.956430 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.972303 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-p8wzm"]
Mar 08 00:44:43 crc kubenswrapper[4762]: I0308 00:44:43.986372 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pxw9p"]
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.003730 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-p9m92"]
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.004963 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-p9m92"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.010225 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.010410 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2lct7"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.010551 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.027690 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8511806b-d3fb-48df-8348-33f84645e2a3-etc-machine-id\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.027769 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-config\") pod \"neutron-db-sync-p8wzm\" (UID: \"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997\") " pod="openstack/neutron-db-sync-p8wzm"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.027802 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q5gj\" (UniqueName: \"kubernetes.io/projected/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-kube-api-access-8q5gj\") pod \"neutron-db-sync-p8wzm\" (UID: \"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997\") " pod="openstack/neutron-db-sync-p8wzm"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.027828 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-db-sync-config-data\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.027858 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-combined-ca-bundle\") pod \"neutron-db-sync-p8wzm\" (UID: \"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997\") " pod="openstack/neutron-db-sync-p8wzm"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.027915 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-scripts\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.027955 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-combined-ca-bundle\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.027971 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-config-data\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.027995 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbcrw\" (UniqueName: \"kubernetes.io/projected/8511806b-d3fb-48df-8348-33f84645e2a3-kube-api-access-kbcrw\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.028080 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-w6rs6"]
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.029173 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-w6rs6"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.032554 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.035419 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-l8dlj"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.065801 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-p9m92"]
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.092801 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-w6rs6"]
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.105997 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-z4g9j"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.134654 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-combined-ca-bundle\") pod \"placement-db-sync-p9m92\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " pod="openstack/placement-db-sync-p9m92"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.134909 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q5gj\" (UniqueName: \"kubernetes.io/projected/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-kube-api-access-8q5gj\") pod \"neutron-db-sync-p8wzm\" (UID: \"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997\") " pod="openstack/neutron-db-sync-p8wzm"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.135001 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-db-sync-config-data\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.135089 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-combined-ca-bundle\") pod \"neutron-db-sync-p8wzm\" (UID: \"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997\") " pod="openstack/neutron-db-sync-p8wzm"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.135235 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqcrq\" (UniqueName: \"kubernetes.io/projected/6898c30b-2e0c-4062-b5f2-70aa22bb5139-kube-api-access-fqcrq\") pod \"barbican-db-sync-w6rs6\" (UID: \"6898c30b-2e0c-4062-b5f2-70aa22bb5139\") " pod="openstack/barbican-db-sync-w6rs6"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.135354 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2x2t\" (UniqueName: \"kubernetes.io/projected/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-kube-api-access-z2x2t\") pod \"placement-db-sync-p9m92\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " pod="openstack/placement-db-sync-p9m92"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.135450 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-scripts\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.135554 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6898c30b-2e0c-4062-b5f2-70aa22bb5139-db-sync-config-data\") pod \"barbican-db-sync-w6rs6\" (UID: \"6898c30b-2e0c-4062-b5f2-70aa22bb5139\") " pod="openstack/barbican-db-sync-w6rs6"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.135670 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-combined-ca-bundle\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.135743 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-config-data\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.135831 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-scripts\") pod \"placement-db-sync-p9m92\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " pod="openstack/placement-db-sync-p9m92"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.135928 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6898c30b-2e0c-4062-b5f2-70aa22bb5139-combined-ca-bundle\") pod \"barbican-db-sync-w6rs6\" (UID: \"6898c30b-2e0c-4062-b5f2-70aa22bb5139\") " pod="openstack/barbican-db-sync-w6rs6"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.135996 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbcrw\" (UniqueName: \"kubernetes.io/projected/8511806b-d3fb-48df-8348-33f84645e2a3-kube-api-access-kbcrw\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.136287 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8511806b-d3fb-48df-8348-33f84645e2a3-etc-machine-id\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.136359 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-config-data\") pod \"placement-db-sync-p9m92\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " pod="openstack/placement-db-sync-p9m92"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.136434 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-logs\") pod \"placement-db-sync-p9m92\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " pod="openstack/placement-db-sync-p9m92"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.136505 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-config\") pod \"neutron-db-sync-p8wzm\" (UID: \"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997\") " pod="openstack/neutron-db-sync-p8wzm"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.144256 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-config\") pod \"neutron-db-sync-p8wzm\" (UID: \"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997\") " pod="openstack/neutron-db-sync-p8wzm"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.144427 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8511806b-d3fb-48df-8348-33f84645e2a3-etc-machine-id\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.148343 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-fdrt4"]
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.150586 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-combined-ca-bundle\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.153115 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-scripts\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.160675 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-config-data\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.165586 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-combined-ca-bundle\") pod \"neutron-db-sync-p8wzm\" (UID: \"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997\") " pod="openstack/neutron-db-sync-p8wzm"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.165667 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-mrfsn"]
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.167337 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.173099 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-db-sync-config-data\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.178885 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q5gj\" (UniqueName: \"kubernetes.io/projected/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-kube-api-access-8q5gj\") pod \"neutron-db-sync-p8wzm\" (UID: \"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997\") " pod="openstack/neutron-db-sync-p8wzm"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.183434 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-mrfsn"]
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.199682 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbcrw\" (UniqueName: \"kubernetes.io/projected/8511806b-d3fb-48df-8348-33f84645e2a3-kube-api-access-kbcrw\") pod \"cinder-db-sync-pxw9p\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " pod="openstack/cinder-db-sync-pxw9p"
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.221821 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.224108 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.235135 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.235532 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.235721 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.238930 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-combined-ca-bundle\") pod \"placement-db-sync-p9m92\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " pod="openstack/placement-db-sync-p9m92" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.238987 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.239064 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.239097 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqcrq\" (UniqueName: \"kubernetes.io/projected/6898c30b-2e0c-4062-b5f2-70aa22bb5139-kube-api-access-fqcrq\") pod 
\"barbican-db-sync-w6rs6\" (UID: \"6898c30b-2e0c-4062-b5f2-70aa22bb5139\") " pod="openstack/barbican-db-sync-w6rs6" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.239122 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2x2t\" (UniqueName: \"kubernetes.io/projected/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-kube-api-access-z2x2t\") pod \"placement-db-sync-p9m92\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " pod="openstack/placement-db-sync-p9m92" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.239185 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-config\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.239227 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6898c30b-2e0c-4062-b5f2-70aa22bb5139-db-sync-config-data\") pod \"barbican-db-sync-w6rs6\" (UID: \"6898c30b-2e0c-4062-b5f2-70aa22bb5139\") " pod="openstack/barbican-db-sync-w6rs6" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.239250 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.239279 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-dns-swift-storage-0\") pod 
\"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.239305 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-scripts\") pod \"placement-db-sync-p9m92\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " pod="openstack/placement-db-sync-p9m92" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.239337 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6898c30b-2e0c-4062-b5f2-70aa22bb5139-combined-ca-bundle\") pod \"barbican-db-sync-w6rs6\" (UID: \"6898c30b-2e0c-4062-b5f2-70aa22bb5139\") " pod="openstack/barbican-db-sync-w6rs6" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.239365 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lshgb\" (UniqueName: \"kubernetes.io/projected/ab202f58-df7d-49ee-bf13-116fee0dc87c-kube-api-access-lshgb\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.239397 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-config-data\") pod \"placement-db-sync-p9m92\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " pod="openstack/placement-db-sync-p9m92" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.239427 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-logs\") pod \"placement-db-sync-p9m92\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " 
pod="openstack/placement-db-sync-p9m92" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.240003 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-logs\") pod \"placement-db-sync-p9m92\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " pod="openstack/placement-db-sync-p9m92" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.252303 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6898c30b-2e0c-4062-b5f2-70aa22bb5139-db-sync-config-data\") pod \"barbican-db-sync-w6rs6\" (UID: \"6898c30b-2e0c-4062-b5f2-70aa22bb5139\") " pod="openstack/barbican-db-sync-w6rs6" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.253866 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6898c30b-2e0c-4062-b5f2-70aa22bb5139-combined-ca-bundle\") pod \"barbican-db-sync-w6rs6\" (UID: \"6898c30b-2e0c-4062-b5f2-70aa22bb5139\") " pod="openstack/barbican-db-sync-w6rs6" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.269157 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-scripts\") pod \"placement-db-sync-p9m92\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " pod="openstack/placement-db-sync-p9m92" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.282384 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-combined-ca-bundle\") pod \"placement-db-sync-p9m92\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " pod="openstack/placement-db-sync-p9m92" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.283913 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-fqcrq\" (UniqueName: \"kubernetes.io/projected/6898c30b-2e0c-4062-b5f2-70aa22bb5139-kube-api-access-fqcrq\") pod \"barbican-db-sync-w6rs6\" (UID: \"6898c30b-2e0c-4062-b5f2-70aa22bb5139\") " pod="openstack/barbican-db-sync-w6rs6" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.295350 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2x2t\" (UniqueName: \"kubernetes.io/projected/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-kube-api-access-z2x2t\") pod \"placement-db-sync-p9m92\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " pod="openstack/placement-db-sync-p9m92" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.304480 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-config-data\") pod \"placement-db-sync-p9m92\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " pod="openstack/placement-db-sync-p9m92" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.346391 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lshgb\" (UniqueName: \"kubernetes.io/projected/ab202f58-df7d-49ee-bf13-116fee0dc87c-kube-api-access-lshgb\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.346638 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-run-httpd\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.346677 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-log-httpd\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.346744 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.346771 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6wql\" (UniqueName: \"kubernetes.io/projected/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-kube-api-access-q6wql\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.346823 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.346845 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.346895 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" 
(UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.346917 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-config-data\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.346947 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-scripts\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.346966 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-config\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.347044 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.347078 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 
crc kubenswrapper[4762]: I0308 00:44:44.348499 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.349385 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.353499 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.359060 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.359301 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-config\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.386670 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-lshgb\" (UniqueName: \"kubernetes.io/projected/ab202f58-df7d-49ee-bf13-116fee0dc87c-kube-api-access-lshgb\") pod \"dnsmasq-dns-56df8fb6b7-mrfsn\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.452128 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.452430 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.452557 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-config-data\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.452655 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-scripts\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.452821 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-run-httpd\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " 
pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.452938 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-log-httpd\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.453095 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6wql\" (UniqueName: \"kubernetes.io/projected/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-kube-api-access-q6wql\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.459431 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.478127 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-log-httpd\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.479439 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-run-httpd\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.479741 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-scripts\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.481010 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6wql\" (UniqueName: \"kubernetes.io/projected/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-kube-api-access-q6wql\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.481895 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.484343 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74dc88fc-twntd" event={"ID":"d12ba102-382c-4b30-a18e-8b94e7879453","Type":"ContainerDied","Data":"83117d811dd860830ffd1e2365bd2a9199c49a35fed60ea37d4ae8def51f772f"} Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.484377 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83117d811dd860830ffd1e2365bd2a9199c49a35fed60ea37d4ae8def51f772f" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.487146 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4jv7v" event={"ID":"69d92be0-6a23-4bc5-93b5-342f087356be","Type":"ContainerDied","Data":"9d4b57064f7b7b67cbd23fa34ee65f7e6f0f5502dec6cecd31a839c718ad3b47"} Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.487190 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d4b57064f7b7b67cbd23fa34ee65f7e6f0f5502dec6cecd31a839c718ad3b47" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 
00:44:44.488306 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-config-data\") pod \"ceilometer-0\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.489062 4762 generic.go:334] "Generic (PLEG): container finished" podID="2c63252b-7c30-4197-b34a-9870713af320" containerID="cc0b65259d76d471d103f856051ff13becd6b17ad16c2d79aa9ead075ad30659" exitCode=0 Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.489174 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" event={"ID":"2c63252b-7c30-4197-b34a-9870713af320","Type":"ContainerDied","Data":"cc0b65259d76d471d103f856051ff13becd6b17ad16c2d79aa9ead075ad30659"} Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.489268 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" event={"ID":"2c63252b-7c30-4197-b34a-9870713af320","Type":"ContainerStarted","Data":"73e929db7ee0d1732646c204a03128717cdcaf84d3a0dd2dd46b20af45e44489"} Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.555040 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-p8wzm" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.583935 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-pxw9p" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.598248 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-p9m92" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.616540 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-w6rs6" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.638947 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.649864 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4jv7v" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.659005 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.671680 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.764622 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtcwb\" (UniqueName: \"kubernetes.io/projected/69d92be0-6a23-4bc5-93b5-342f087356be-kube-api-access-dtcwb\") pod \"69d92be0-6a23-4bc5-93b5-342f087356be\" (UID: \"69d92be0-6a23-4bc5-93b5-342f087356be\") " Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.764679 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-ovsdbserver-nb\") pod \"d12ba102-382c-4b30-a18e-8b94e7879453\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.764752 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-ovsdbserver-sb\") pod \"d12ba102-382c-4b30-a18e-8b94e7879453\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.764817 4762 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-dns-svc\") pod \"d12ba102-382c-4b30-a18e-8b94e7879453\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.764859 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpnrn\" (UniqueName: \"kubernetes.io/projected/d12ba102-382c-4b30-a18e-8b94e7879453-kube-api-access-cpnrn\") pod \"d12ba102-382c-4b30-a18e-8b94e7879453\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.764882 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-config\") pod \"d12ba102-382c-4b30-a18e-8b94e7879453\" (UID: \"d12ba102-382c-4b30-a18e-8b94e7879453\") " Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.764913 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69d92be0-6a23-4bc5-93b5-342f087356be-operator-scripts\") pod \"69d92be0-6a23-4bc5-93b5-342f087356be\" (UID: \"69d92be0-6a23-4bc5-93b5-342f087356be\") " Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.769483 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69d92be0-6a23-4bc5-93b5-342f087356be-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "69d92be0-6a23-4bc5-93b5-342f087356be" (UID: "69d92be0-6a23-4bc5-93b5-342f087356be"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.775250 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-chxth"] Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.779122 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69d92be0-6a23-4bc5-93b5-342f087356be-kube-api-access-dtcwb" (OuterVolumeSpecName: "kube-api-access-dtcwb") pod "69d92be0-6a23-4bc5-93b5-342f087356be" (UID: "69d92be0-6a23-4bc5-93b5-342f087356be"). InnerVolumeSpecName "kube-api-access-dtcwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.788122 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d12ba102-382c-4b30-a18e-8b94e7879453-kube-api-access-cpnrn" (OuterVolumeSpecName: "kube-api-access-cpnrn") pod "d12ba102-382c-4b30-a18e-8b94e7879453" (UID: "d12ba102-382c-4b30-a18e-8b94e7879453"). InnerVolumeSpecName "kube-api-access-cpnrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.828064 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d12ba102-382c-4b30-a18e-8b94e7879453" (UID: "d12ba102-382c-4b30-a18e-8b94e7879453"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.829110 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d12ba102-382c-4b30-a18e-8b94e7879453" (UID: "d12ba102-382c-4b30-a18e-8b94e7879453"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.839628 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-config" (OuterVolumeSpecName: "config") pod "d12ba102-382c-4b30-a18e-8b94e7879453" (UID: "d12ba102-382c-4b30-a18e-8b94e7879453"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.854048 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d12ba102-382c-4b30-a18e-8b94e7879453" (UID: "d12ba102-382c-4b30-a18e-8b94e7879453"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.867558 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtcwb\" (UniqueName: \"kubernetes.io/projected/69d92be0-6a23-4bc5-93b5-342f087356be-kube-api-access-dtcwb\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.867592 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.867601 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.867612 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:44 crc 
kubenswrapper[4762]: I0308 00:44:44.867622 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpnrn\" (UniqueName: \"kubernetes.io/projected/d12ba102-382c-4b30-a18e-8b94e7879453-kube-api-access-cpnrn\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.867631 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d12ba102-382c-4b30-a18e-8b94e7879453-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:44 crc kubenswrapper[4762]: I0308 00:44:44.867639 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/69d92be0-6a23-4bc5-93b5-342f087356be-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.056999 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-z4g9j"] Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.065312 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-fdrt4"] Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.261170 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.376864 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-dns-svc\") pod \"2c63252b-7c30-4197-b34a-9870713af320\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.376918 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-config\") pod \"2c63252b-7c30-4197-b34a-9870713af320\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.376972 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-dns-swift-storage-0\") pod \"2c63252b-7c30-4197-b34a-9870713af320\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.376997 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfd8l\" (UniqueName: \"kubernetes.io/projected/2c63252b-7c30-4197-b34a-9870713af320-kube-api-access-tfd8l\") pod \"2c63252b-7c30-4197-b34a-9870713af320\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.377040 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-ovsdbserver-nb\") pod \"2c63252b-7c30-4197-b34a-9870713af320\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.377265 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-ovsdbserver-sb\") pod \"2c63252b-7c30-4197-b34a-9870713af320\" (UID: \"2c63252b-7c30-4197-b34a-9870713af320\") " Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.384576 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c63252b-7c30-4197-b34a-9870713af320-kube-api-access-tfd8l" (OuterVolumeSpecName: "kube-api-access-tfd8l") pod "2c63252b-7c30-4197-b34a-9870713af320" (UID: "2c63252b-7c30-4197-b34a-9870713af320"). InnerVolumeSpecName "kube-api-access-tfd8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.411280 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2c63252b-7c30-4197-b34a-9870713af320" (UID: "2c63252b-7c30-4197-b34a-9870713af320"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.412188 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c63252b-7c30-4197-b34a-9870713af320" (UID: "2c63252b-7c30-4197-b34a-9870713af320"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.418519 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-config" (OuterVolumeSpecName: "config") pod "2c63252b-7c30-4197-b34a-9870713af320" (UID: "2c63252b-7c30-4197-b34a-9870713af320"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.419054 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2c63252b-7c30-4197-b34a-9870713af320" (UID: "2c63252b-7c30-4197-b34a-9870713af320"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.422575 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2c63252b-7c30-4197-b34a-9870713af320" (UID: "2c63252b-7c30-4197-b34a-9870713af320"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.480602 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.480631 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.480640 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.480649 4762 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 08 
00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.480659 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfd8l\" (UniqueName: \"kubernetes.io/projected/2c63252b-7c30-4197-b34a-9870713af320-kube-api-access-tfd8l\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.480679 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c63252b-7c30-4197-b34a-9870713af320-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.507451 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-w6rs6"] Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.536658 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-p9m92"] Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.541054 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4" event={"ID":"28304045-4be1-47ec-99d2-f77171370750","Type":"ContainerStarted","Data":"00688f44d7e30f6f69408c29e03d6071e01209cd6d9fc6fbc6ec99af8392e439"} Mar 08 00:44:45 crc kubenswrapper[4762]: W0308 00:44:45.542540 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2613b509_c9d0_4a4b_99c0_11c8c9a0e891.slice/crio-658d38d95aab83d5e765437440378913db6f2331759ed49b4cf37797b42210ce WatchSource:0}: Error finding container 658d38d95aab83d5e765437440378913db6f2331759ed49b4cf37797b42210ce: Status 404 returned error can't find the container with id 658d38d95aab83d5e765437440378913db6f2331759ed49b4cf37797b42210ce Mar 08 00:44:45 crc kubenswrapper[4762]: W0308 00:44:45.549495 4762 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8511806b_d3fb_48df_8348_33f84645e2a3.slice/crio-7545744ac9c55218e5e7d435ae53169aca902fa675dc431369ef735e10384878 WatchSource:0}: Error finding container 7545744ac9c55218e5e7d435ae53169aca902fa675dc431369ef735e10384878: Status 404 returned error can't find the container with id 7545744ac9c55218e5e7d435ae53169aca902fa675dc431369ef735e10384878 Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.549885 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.549885 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rqhpw" event={"ID":"2c63252b-7c30-4197-b34a-9870713af320","Type":"ContainerDied","Data":"73e929db7ee0d1732646c204a03128717cdcaf84d3a0dd2dd46b20af45e44489"} Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.549998 4762 scope.go:117] "RemoveContainer" containerID="cc0b65259d76d471d103f856051ff13becd6b17ad16c2d79aa9ead075ad30659" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.556603 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-pxw9p"] Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.558169 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-z4g9j" event={"ID":"4992e7da-9de7-4354-a35f-a68f8bd0013a","Type":"ContainerStarted","Data":"6e26c685d72544e5d4b3f4de972bf580974b5766b12ea43f9bd8ab3ecd042b7a"} Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.570028 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-p8wzm"] Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.575441 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"8103d22d-043e-4af1-a19d-307905e2a05f","Type":"ContainerStarted","Data":"cfff4d11deb6c006eae3699f00739a7fbd47161fcf44b41f19263e2f43420e75"} Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.586939 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74dc88fc-twntd" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.588348 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-chxth" event={"ID":"1b8c1e48-ff0b-44f1-9b7b-21aec7963747","Type":"ContainerStarted","Data":"9981db043aa2dde646fea769431ba4acc80349a1d84fbadfe09ccb3c34c5620e"} Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.588392 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-chxth" event={"ID":"1b8c1e48-ff0b-44f1-9b7b-21aec7963747","Type":"ContainerStarted","Data":"122116420cad226a70354e80c859567d0fc6a36312d9609ef0d03eac37a7b6ad"} Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.588446 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4jv7v" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.609821 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-mrfsn"] Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.630885 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.633518 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-chxth" podStartSLOduration=2.633506292 podStartE2EDuration="2.633506292s" podCreationTimestamp="2026-03-08 00:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:44:45.618589019 +0000 UTC m=+1307.092733363" watchObservedRunningTime="2026-03-08 00:44:45.633506292 +0000 UTC m=+1307.107650636" Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.672363 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-twntd"] Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.680075 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74dc88fc-twntd"] Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.733992 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rqhpw"] Mar 08 00:44:45 crc kubenswrapper[4762]: I0308 00:44:45.745388 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rqhpw"] Mar 08 00:44:46 crc kubenswrapper[4762]: I0308 00:44:46.607832 4762 generic.go:334] "Generic (PLEG): container finished" podID="ab202f58-df7d-49ee-bf13-116fee0dc87c" containerID="8a02f5a06a6383b02e748eca50e74acb673f7c11758474234ca2b0729fd33df6" exitCode=0 Mar 08 00:44:46 crc kubenswrapper[4762]: I0308 00:44:46.607963 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" event={"ID":"ab202f58-df7d-49ee-bf13-116fee0dc87c","Type":"ContainerDied","Data":"8a02f5a06a6383b02e748eca50e74acb673f7c11758474234ca2b0729fd33df6"} Mar 08 00:44:46 crc kubenswrapper[4762]: I0308 00:44:46.608356 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" event={"ID":"ab202f58-df7d-49ee-bf13-116fee0dc87c","Type":"ContainerStarted","Data":"46e7bcee59d2f0df686c03c0cc13c19110dc0c75ebdd450074903834df172dde"} Mar 08 00:44:46 crc kubenswrapper[4762]: I0308 00:44:46.613134 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3","Type":"ContainerStarted","Data":"3a604f31656bf300a0bb6b9e39f92934314c59b6401afce47b47133cd151ffb3"} Mar 08 00:44:46 crc kubenswrapper[4762]: I0308 00:44:46.626771 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8103d22d-043e-4af1-a19d-307905e2a05f","Type":"ContainerStarted","Data":"7ab395902f4bda684a503673a73b31c141869d586d833630c4b4ae91f3d89f8a"} Mar 08 00:44:46 crc kubenswrapper[4762]: I0308 00:44:46.630390 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-p8wzm" event={"ID":"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997","Type":"ContainerStarted","Data":"131af0bb5a04281b09939b68e91fbaedcdc8412955b11584f37b21d49d4b280b"} Mar 08 00:44:46 crc kubenswrapper[4762]: I0308 00:44:46.630470 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-p8wzm" event={"ID":"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997","Type":"ContainerStarted","Data":"312b5f0949d73f689c67b2b8b48c7397ee7a8911fd3ddc79cce71c7a2866ef59"} Mar 08 00:44:46 crc kubenswrapper[4762]: I0308 00:44:46.639234 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-p9m92" 
event={"ID":"2613b509-c9d0-4a4b-99c0-11c8c9a0e891","Type":"ContainerStarted","Data":"658d38d95aab83d5e765437440378913db6f2331759ed49b4cf37797b42210ce"} Mar 08 00:44:46 crc kubenswrapper[4762]: I0308 00:44:46.663951 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.663934464 podStartE2EDuration="16.663934464s" podCreationTimestamp="2026-03-08 00:44:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:44:46.658930552 +0000 UTC m=+1308.133074896" watchObservedRunningTime="2026-03-08 00:44:46.663934464 +0000 UTC m=+1308.138078808" Mar 08 00:44:46 crc kubenswrapper[4762]: I0308 00:44:46.664074 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pxw9p" event={"ID":"8511806b-d3fb-48df-8348-33f84645e2a3","Type":"ContainerStarted","Data":"7545744ac9c55218e5e7d435ae53169aca902fa675dc431369ef735e10384878"} Mar 08 00:44:46 crc kubenswrapper[4762]: I0308 00:44:46.671415 4762 generic.go:334] "Generic (PLEG): container finished" podID="28304045-4be1-47ec-99d2-f77171370750" containerID="7816c9bba222ef1c45653f6c51f237f7e59a89f619327a8db5c3bc2d7ed32b26" exitCode=0 Mar 08 00:44:46 crc kubenswrapper[4762]: I0308 00:44:46.671503 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4" event={"ID":"28304045-4be1-47ec-99d2-f77171370750","Type":"ContainerDied","Data":"7816c9bba222ef1c45653f6c51f237f7e59a89f619327a8db5c3bc2d7ed32b26"} Mar 08 00:44:46 crc kubenswrapper[4762]: I0308 00:44:46.678145 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-w6rs6" event={"ID":"6898c30b-2e0c-4062-b5f2-70aa22bb5139","Type":"ContainerStarted","Data":"dd703df2e994e5bcf3195b4e129b52683ff187ed801c5f19cb734356d8cfca96"} Mar 08 00:44:46 crc kubenswrapper[4762]: I0308 00:44:46.701526 4762 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-p8wzm" podStartSLOduration=3.701504807 podStartE2EDuration="3.701504807s" podCreationTimestamp="2026-03-08 00:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:44:46.681128187 +0000 UTC m=+1308.155272531" watchObservedRunningTime="2026-03-08 00:44:46.701504807 +0000 UTC m=+1308.175649151" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.194362 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.255578 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-ovsdbserver-sb\") pod \"28304045-4be1-47ec-99d2-f77171370750\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.255620 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhfj5\" (UniqueName: \"kubernetes.io/projected/28304045-4be1-47ec-99d2-f77171370750-kube-api-access-fhfj5\") pod \"28304045-4be1-47ec-99d2-f77171370750\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.255815 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-dns-swift-storage-0\") pod \"28304045-4be1-47ec-99d2-f77171370750\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.255839 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-ovsdbserver-nb\") pod \"28304045-4be1-47ec-99d2-f77171370750\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.255856 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-dns-svc\") pod \"28304045-4be1-47ec-99d2-f77171370750\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.255938 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-config\") pod \"28304045-4be1-47ec-99d2-f77171370750\" (UID: \"28304045-4be1-47ec-99d2-f77171370750\") " Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.280049 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28304045-4be1-47ec-99d2-f77171370750-kube-api-access-fhfj5" (OuterVolumeSpecName: "kube-api-access-fhfj5") pod "28304045-4be1-47ec-99d2-f77171370750" (UID: "28304045-4be1-47ec-99d2-f77171370750"). InnerVolumeSpecName "kube-api-access-fhfj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.308784 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "28304045-4be1-47ec-99d2-f77171370750" (UID: "28304045-4be1-47ec-99d2-f77171370750"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.326266 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c63252b-7c30-4197-b34a-9870713af320" path="/var/lib/kubelet/pods/2c63252b-7c30-4197-b34a-9870713af320/volumes" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.327064 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d12ba102-382c-4b30-a18e-8b94e7879453" path="/var/lib/kubelet/pods/d12ba102-382c-4b30-a18e-8b94e7879453/volumes" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.328263 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "28304045-4be1-47ec-99d2-f77171370750" (UID: "28304045-4be1-47ec-99d2-f77171370750"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.348280 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-config" (OuterVolumeSpecName: "config") pod "28304045-4be1-47ec-99d2-f77171370750" (UID: "28304045-4be1-47ec-99d2-f77171370750"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.352221 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "28304045-4be1-47ec-99d2-f77171370750" (UID: "28304045-4be1-47ec-99d2-f77171370750"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.354692 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "28304045-4be1-47ec-99d2-f77171370750" (UID: "28304045-4be1-47ec-99d2-f77171370750"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.359728 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.360020 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.360048 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhfj5\" (UniqueName: \"kubernetes.io/projected/28304045-4be1-47ec-99d2-f77171370750-kube-api-access-fhfj5\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.360060 4762 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.360070 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.360079 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/28304045-4be1-47ec-99d2-f77171370750-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.715689 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" event={"ID":"ab202f58-df7d-49ee-bf13-116fee0dc87c","Type":"ContainerStarted","Data":"f84bfb5fc28d8710f0f0e31fe26c83363a6285a8668695dc034a7091af24e8dd"} Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.715801 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.721634 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.721643 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-fdrt4" event={"ID":"28304045-4be1-47ec-99d2-f77171370750","Type":"ContainerDied","Data":"00688f44d7e30f6f69408c29e03d6071e01209cd6d9fc6fbc6ec99af8392e439"} Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.721689 4762 scope.go:117] "RemoveContainer" containerID="7816c9bba222ef1c45653f6c51f237f7e59a89f619327a8db5c3bc2d7ed32b26" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.748136 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" podStartSLOduration=3.7481093210000003 podStartE2EDuration="3.748109321s" podCreationTimestamp="2026-03-08 00:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:44:47.747000368 +0000 UTC m=+1309.221144742" watchObservedRunningTime="2026-03-08 00:44:47.748109321 +0000 UTC m=+1309.222253705" Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.792435 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-bbf5cc879-fdrt4"] Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.800162 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-fdrt4"] Mar 08 00:44:47 crc kubenswrapper[4762]: I0308 00:44:47.910567 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:44:49 crc kubenswrapper[4762]: I0308 00:44:49.279870 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28304045-4be1-47ec-99d2-f77171370750" path="/var/lib/kubelet/pods/28304045-4be1-47ec-99d2-f77171370750/volumes" Mar 08 00:44:49 crc kubenswrapper[4762]: I0308 00:44:49.752056 4762 generic.go:334] "Generic (PLEG): container finished" podID="1b8c1e48-ff0b-44f1-9b7b-21aec7963747" containerID="9981db043aa2dde646fea769431ba4acc80349a1d84fbadfe09ccb3c34c5620e" exitCode=0 Mar 08 00:44:49 crc kubenswrapper[4762]: I0308 00:44:49.752096 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-chxth" event={"ID":"1b8c1e48-ff0b-44f1-9b7b-21aec7963747","Type":"ContainerDied","Data":"9981db043aa2dde646fea769431ba4acc80349a1d84fbadfe09ccb3c34c5620e"} Mar 08 00:44:50 crc kubenswrapper[4762]: I0308 00:44:50.383525 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 08 00:44:54 crc kubenswrapper[4762]: I0308 00:44:54.681958 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:44:54 crc kubenswrapper[4762]: I0308 00:44:54.783685 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-28hm7"] Mar 08 00:44:54 crc kubenswrapper[4762]: I0308 00:44:54.783946 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" podUID="2d8bdde6-0986-49af-98a6-9879bd12953c" containerName="dnsmasq-dns" 
containerID="cri-o://334753ee077fe567e4a6d0ba6c0b36d20002710037bad1964eb68c8614584e20" gracePeriod=10 Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.594952 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-chxth" Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.761275 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6pvq\" (UniqueName: \"kubernetes.io/projected/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-kube-api-access-m6pvq\") pod \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.761368 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-scripts\") pod \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.761398 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-combined-ca-bundle\") pod \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.761426 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-config-data\") pod \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.761503 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-credential-keys\") pod \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\" 
(UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.761539 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-fernet-keys\") pod \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\" (UID: \"1b8c1e48-ff0b-44f1-9b7b-21aec7963747\") " Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.772019 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-scripts" (OuterVolumeSpecName: "scripts") pod "1b8c1e48-ff0b-44f1-9b7b-21aec7963747" (UID: "1b8c1e48-ff0b-44f1-9b7b-21aec7963747"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.772090 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1b8c1e48-ff0b-44f1-9b7b-21aec7963747" (UID: "1b8c1e48-ff0b-44f1-9b7b-21aec7963747"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.772110 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1b8c1e48-ff0b-44f1-9b7b-21aec7963747" (UID: "1b8c1e48-ff0b-44f1-9b7b-21aec7963747"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.772244 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-kube-api-access-m6pvq" (OuterVolumeSpecName: "kube-api-access-m6pvq") pod "1b8c1e48-ff0b-44f1-9b7b-21aec7963747" (UID: "1b8c1e48-ff0b-44f1-9b7b-21aec7963747"). InnerVolumeSpecName "kube-api-access-m6pvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.794862 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-config-data" (OuterVolumeSpecName: "config-data") pod "1b8c1e48-ff0b-44f1-9b7b-21aec7963747" (UID: "1b8c1e48-ff0b-44f1-9b7b-21aec7963747"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.798888 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b8c1e48-ff0b-44f1-9b7b-21aec7963747" (UID: "1b8c1e48-ff0b-44f1-9b7b-21aec7963747"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.851213 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-chxth" event={"ID":"1b8c1e48-ff0b-44f1-9b7b-21aec7963747","Type":"ContainerDied","Data":"122116420cad226a70354e80c859567d0fc6a36312d9609ef0d03eac37a7b6ad"} Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.851273 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="122116420cad226a70354e80c859567d0fc6a36312d9609ef0d03eac37a7b6ad" Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.851237 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-chxth" Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.854001 4762 generic.go:334] "Generic (PLEG): container finished" podID="2d8bdde6-0986-49af-98a6-9879bd12953c" containerID="334753ee077fe567e4a6d0ba6c0b36d20002710037bad1964eb68c8614584e20" exitCode=0 Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.854046 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" event={"ID":"2d8bdde6-0986-49af-98a6-9879bd12953c","Type":"ContainerDied","Data":"334753ee077fe567e4a6d0ba6c0b36d20002710037bad1964eb68c8614584e20"} Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.863810 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.863938 4762 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.864002 4762 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.864061 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6pvq\" (UniqueName: \"kubernetes.io/projected/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-kube-api-access-m6pvq\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.864119 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:55 crc kubenswrapper[4762]: I0308 00:44:55.864175 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8c1e48-ff0b-44f1-9b7b-21aec7963747-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.688331 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-chxth"] Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.700346 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-chxth"] Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.776137 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hjftj"] Mar 08 00:44:56 crc kubenswrapper[4762]: E0308 00:44:56.776511 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c63252b-7c30-4197-b34a-9870713af320" containerName="init" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.776524 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c63252b-7c30-4197-b34a-9870713af320" containerName="init" Mar 08 00:44:56 crc kubenswrapper[4762]: E0308 00:44:56.776544 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d92be0-6a23-4bc5-93b5-342f087356be" containerName="mariadb-account-create-update" 
Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.776551 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d92be0-6a23-4bc5-93b5-342f087356be" containerName="mariadb-account-create-update" Mar 08 00:44:56 crc kubenswrapper[4762]: E0308 00:44:56.776563 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b8c1e48-ff0b-44f1-9b7b-21aec7963747" containerName="keystone-bootstrap" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.776570 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b8c1e48-ff0b-44f1-9b7b-21aec7963747" containerName="keystone-bootstrap" Mar 08 00:44:56 crc kubenswrapper[4762]: E0308 00:44:56.776583 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28304045-4be1-47ec-99d2-f77171370750" containerName="init" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.776592 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="28304045-4be1-47ec-99d2-f77171370750" containerName="init" Mar 08 00:44:56 crc kubenswrapper[4762]: E0308 00:44:56.776609 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d12ba102-382c-4b30-a18e-8b94e7879453" containerName="init" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.776619 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d12ba102-382c-4b30-a18e-8b94e7879453" containerName="init" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.776915 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="d12ba102-382c-4b30-a18e-8b94e7879453" containerName="init" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.776940 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d92be0-6a23-4bc5-93b5-342f087356be" containerName="mariadb-account-create-update" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.776950 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c63252b-7c30-4197-b34a-9870713af320" containerName="init" Mar 08 00:44:56 crc 
kubenswrapper[4762]: I0308 00:44:56.776961 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="28304045-4be1-47ec-99d2-f77171370750" containerName="init" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.776975 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b8c1e48-ff0b-44f1-9b7b-21aec7963747" containerName="keystone-bootstrap" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.778753 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.790402 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hjftj"] Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.823477 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rwwq6" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.823701 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.823916 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.824044 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.824146 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.881067 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-scripts\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.881392 4762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-config-data\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.881433 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-fernet-keys\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.881511 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-credential-keys\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.881542 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q9bh\" (UniqueName: \"kubernetes.io/projected/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-kube-api-access-7q9bh\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.881580 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-combined-ca-bundle\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.983483 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-credential-keys\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.983563 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q9bh\" (UniqueName: \"kubernetes.io/projected/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-kube-api-access-7q9bh\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.983610 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-combined-ca-bundle\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.983698 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-scripts\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.983718 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-config-data\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.983742 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-fernet-keys\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.988259 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-fernet-keys\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:56 crc kubenswrapper[4762]: I0308 00:44:56.989305 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-config-data\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:57 crc kubenswrapper[4762]: I0308 00:44:57.000511 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-scripts\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:57 crc kubenswrapper[4762]: I0308 00:44:57.000811 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-credential-keys\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:57 crc kubenswrapper[4762]: I0308 00:44:57.000999 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q9bh\" (UniqueName: \"kubernetes.io/projected/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-kube-api-access-7q9bh\") pod \"keystone-bootstrap-hjftj\" (UID: 
\"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:57 crc kubenswrapper[4762]: I0308 00:44:57.012416 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-combined-ca-bundle\") pod \"keystone-bootstrap-hjftj\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:57 crc kubenswrapper[4762]: I0308 00:44:57.147033 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:44:57 crc kubenswrapper[4762]: I0308 00:44:57.275890 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b8c1e48-ff0b-44f1-9b7b-21aec7963747" path="/var/lib/kubelet/pods/1b8c1e48-ff0b-44f1-9b7b-21aec7963747/volumes" Mar 08 00:44:58 crc kubenswrapper[4762]: I0308 00:44:58.684191 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" podUID="2d8bdde6-0986-49af-98a6-9879bd12953c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.138562 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k"] Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.140410 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.143032 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.143224 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.149058 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k"] Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.246519 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwhkb\" (UniqueName: \"kubernetes.io/projected/15be8fbe-420e-43f5-9797-eba8d83627c5-kube-api-access-cwhkb\") pod \"collect-profiles-29548845-2m84k\" (UID: \"15be8fbe-420e-43f5-9797-eba8d83627c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.246604 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15be8fbe-420e-43f5-9797-eba8d83627c5-secret-volume\") pod \"collect-profiles-29548845-2m84k\" (UID: \"15be8fbe-420e-43f5-9797-eba8d83627c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.246732 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15be8fbe-420e-43f5-9797-eba8d83627c5-config-volume\") pod \"collect-profiles-29548845-2m84k\" (UID: \"15be8fbe-420e-43f5-9797-eba8d83627c5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.348889 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15be8fbe-420e-43f5-9797-eba8d83627c5-secret-volume\") pod \"collect-profiles-29548845-2m84k\" (UID: \"15be8fbe-420e-43f5-9797-eba8d83627c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.348994 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15be8fbe-420e-43f5-9797-eba8d83627c5-config-volume\") pod \"collect-profiles-29548845-2m84k\" (UID: \"15be8fbe-420e-43f5-9797-eba8d83627c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.349123 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwhkb\" (UniqueName: \"kubernetes.io/projected/15be8fbe-420e-43f5-9797-eba8d83627c5-kube-api-access-cwhkb\") pod \"collect-profiles-29548845-2m84k\" (UID: \"15be8fbe-420e-43f5-9797-eba8d83627c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.350189 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15be8fbe-420e-43f5-9797-eba8d83627c5-config-volume\") pod \"collect-profiles-29548845-2m84k\" (UID: \"15be8fbe-420e-43f5-9797-eba8d83627c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.354479 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/15be8fbe-420e-43f5-9797-eba8d83627c5-secret-volume\") pod \"collect-profiles-29548845-2m84k\" (UID: \"15be8fbe-420e-43f5-9797-eba8d83627c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.365200 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwhkb\" (UniqueName: \"kubernetes.io/projected/15be8fbe-420e-43f5-9797-eba8d83627c5-kube-api-access-cwhkb\") pod \"collect-profiles-29548845-2m84k\" (UID: \"15be8fbe-420e-43f5-9797-eba8d83627c5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.382941 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.392791 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.492282 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" Mar 08 00:45:00 crc kubenswrapper[4762]: I0308 00:45:00.908871 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 08 00:45:04 crc kubenswrapper[4762]: I0308 00:45:04.721300 4762 scope.go:117] "RemoveContainer" containerID="9c2425a248af30a42dfb19406e5edbbc08ad5554d9faa03d93aa124095977485" Mar 08 00:45:06 crc kubenswrapper[4762]: I0308 00:45:06.976614 4762 generic.go:334] "Generic (PLEG): container finished" podID="a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997" containerID="131af0bb5a04281b09939b68e91fbaedcdc8412955b11584f37b21d49d4b280b" exitCode=0 Mar 08 00:45:06 crc kubenswrapper[4762]: I0308 00:45:06.976686 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-p8wzm" event={"ID":"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997","Type":"ContainerDied","Data":"131af0bb5a04281b09939b68e91fbaedcdc8412955b11584f37b21d49d4b280b"} Mar 08 00:45:08 crc kubenswrapper[4762]: I0308 00:45:08.684073 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" podUID="2d8bdde6-0986-49af-98a6-9879bd12953c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: i/o timeout" Mar 08 00:45:11 crc kubenswrapper[4762]: E0308 00:45:11.194020 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Mar 08 00:45:11 crc kubenswrapper[4762]: E0308 00:45:11.194444 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c4h565h665h7h56h75h65h5bh547h7fh5c8hbch57bh5b9h565h5c4h9ch58fh56fh65dh5b4h6bhf9h58fh94h5b4h5f6h65dh686h647h64bh566q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6wql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b08eae00-a546-4fa0-bf56-8dbba6c3ffb3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 08 00:45:11 crc kubenswrapper[4762]: E0308 00:45:11.473366 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Mar 08 00:45:11 crc kubenswrapper[4762]: E0308 00:45:11.473536 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c6m4g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-z4g9j_openstack(4992e7da-9de7-4354-a35f-a68f8bd0013a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 
08 00:45:11 crc kubenswrapper[4762]: E0308 00:45:11.474696 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-z4g9j" podUID="4992e7da-9de7-4354-a35f-a68f8bd0013a" Mar 08 00:45:11 crc kubenswrapper[4762]: I0308 00:45:11.611034 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:45:11 crc kubenswrapper[4762]: I0308 00:45:11.701878 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-ovsdbserver-sb\") pod \"2d8bdde6-0986-49af-98a6-9879bd12953c\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " Mar 08 00:45:11 crc kubenswrapper[4762]: I0308 00:45:11.701940 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-dns-svc\") pod \"2d8bdde6-0986-49af-98a6-9879bd12953c\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " Mar 08 00:45:11 crc kubenswrapper[4762]: I0308 00:45:11.702001 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-ovsdbserver-nb\") pod \"2d8bdde6-0986-49af-98a6-9879bd12953c\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " Mar 08 00:45:11 crc kubenswrapper[4762]: I0308 00:45:11.702057 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xd8r\" (UniqueName: \"kubernetes.io/projected/2d8bdde6-0986-49af-98a6-9879bd12953c-kube-api-access-6xd8r\") pod \"2d8bdde6-0986-49af-98a6-9879bd12953c\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " Mar 08 00:45:11 crc kubenswrapper[4762]: I0308 
00:45:11.702113 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-config\") pod \"2d8bdde6-0986-49af-98a6-9879bd12953c\" (UID: \"2d8bdde6-0986-49af-98a6-9879bd12953c\") " Mar 08 00:45:11 crc kubenswrapper[4762]: I0308 00:45:11.710204 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d8bdde6-0986-49af-98a6-9879bd12953c-kube-api-access-6xd8r" (OuterVolumeSpecName: "kube-api-access-6xd8r") pod "2d8bdde6-0986-49af-98a6-9879bd12953c" (UID: "2d8bdde6-0986-49af-98a6-9879bd12953c"). InnerVolumeSpecName "kube-api-access-6xd8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:11 crc kubenswrapper[4762]: I0308 00:45:11.752054 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d8bdde6-0986-49af-98a6-9879bd12953c" (UID: "2d8bdde6-0986-49af-98a6-9879bd12953c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:11 crc kubenswrapper[4762]: I0308 00:45:11.774125 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-config" (OuterVolumeSpecName: "config") pod "2d8bdde6-0986-49af-98a6-9879bd12953c" (UID: "2d8bdde6-0986-49af-98a6-9879bd12953c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:11 crc kubenswrapper[4762]: I0308 00:45:11.778970 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d8bdde6-0986-49af-98a6-9879bd12953c" (UID: "2d8bdde6-0986-49af-98a6-9879bd12953c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:11 crc kubenswrapper[4762]: I0308 00:45:11.781284 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d8bdde6-0986-49af-98a6-9879bd12953c" (UID: "2d8bdde6-0986-49af-98a6-9879bd12953c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:11 crc kubenswrapper[4762]: I0308 00:45:11.804137 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xd8r\" (UniqueName: \"kubernetes.io/projected/2d8bdde6-0986-49af-98a6-9879bd12953c-kube-api-access-6xd8r\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:11 crc kubenswrapper[4762]: I0308 00:45:11.804171 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:11 crc kubenswrapper[4762]: I0308 00:45:11.804182 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:11 crc kubenswrapper[4762]: I0308 00:45:11.804190 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:11 crc kubenswrapper[4762]: I0308 00:45:11.804200 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d8bdde6-0986-49af-98a6-9879bd12953c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:12 crc kubenswrapper[4762]: I0308 00:45:12.044974 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" Mar 08 00:45:12 crc kubenswrapper[4762]: I0308 00:45:12.056869 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" event={"ID":"2d8bdde6-0986-49af-98a6-9879bd12953c","Type":"ContainerDied","Data":"6a8fc056ac466d7f36bc85463f32cc6e5b4d5c78aeda33785370a1020db1a30a"} Mar 08 00:45:12 crc kubenswrapper[4762]: I0308 00:45:12.056938 4762 scope.go:117] "RemoveContainer" containerID="334753ee077fe567e4a6d0ba6c0b36d20002710037bad1964eb68c8614584e20" Mar 08 00:45:12 crc kubenswrapper[4762]: E0308 00:45:12.059293 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-z4g9j" podUID="4992e7da-9de7-4354-a35f-a68f8bd0013a" Mar 08 00:45:12 crc kubenswrapper[4762]: I0308 00:45:12.102862 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-28hm7"] Mar 08 00:45:12 crc kubenswrapper[4762]: I0308 00:45:12.112445 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-28hm7"] Mar 08 00:45:12 crc kubenswrapper[4762]: I0308 00:45:12.851587 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 00:45:12 crc kubenswrapper[4762]: I0308 00:45:12.851673 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Mar 08 00:45:12 crc kubenswrapper[4762]: E0308 00:45:12.997368 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Mar 08 00:45:12 crc kubenswrapper[4762]: E0308 00:45:12.998002 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathEx
pr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kbcrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-pxw9p_openstack(8511806b-d3fb-48df-8348-33f84645e2a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 08 00:45:12 crc kubenswrapper[4762]: E0308 00:45:12.999262 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-pxw9p" podUID="8511806b-d3fb-48df-8348-33f84645e2a3" Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.017048 4762 scope.go:117] "RemoveContainer" containerID="b330aaa1c9064ad182dd0434ffad96a778c74be173d78d369e5fcaca90bb0036" Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.085643 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-p8wzm" event={"ID":"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997","Type":"ContainerDied","Data":"312b5f0949d73f689c67b2b8b48c7397ee7a8911fd3ddc79cce71c7a2866ef59"} Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.085739 4762 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="312b5f0949d73f689c67b2b8b48c7397ee7a8911fd3ddc79cce71c7a2866ef59" Mar 08 00:45:13 crc kubenswrapper[4762]: E0308 00:45:13.101724 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-pxw9p" podUID="8511806b-d3fb-48df-8348-33f84645e2a3" Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.163914 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-p8wzm" Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.290662 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d8bdde6-0986-49af-98a6-9879bd12953c" path="/var/lib/kubelet/pods/2d8bdde6-0986-49af-98a6-9879bd12953c/volumes" Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.333934 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-config\") pod \"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997\" (UID: \"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997\") " Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.334053 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-combined-ca-bundle\") pod \"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997\" (UID: \"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997\") " Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.334141 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q5gj\" (UniqueName: \"kubernetes.io/projected/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-kube-api-access-8q5gj\") pod \"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997\" (UID: \"a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997\") " Mar 08 00:45:13 crc 
kubenswrapper[4762]: I0308 00:45:13.339959 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-kube-api-access-8q5gj" (OuterVolumeSpecName: "kube-api-access-8q5gj") pod "a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997" (UID: "a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997"). InnerVolumeSpecName "kube-api-access-8q5gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.359666 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-config" (OuterVolumeSpecName: "config") pod "a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997" (UID: "a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.360330 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997" (UID: "a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.436613 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.436643 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q5gj\" (UniqueName: \"kubernetes.io/projected/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-kube-api-access-8q5gj\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.436655 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.519309 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k"] Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.538038 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hjftj"] Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.684354 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-28hm7" podUID="2d8bdde6-0986-49af-98a6-9879bd12953c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: i/o timeout" Mar 08 00:45:13 crc kubenswrapper[4762]: W0308 00:45:13.741087 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15be8fbe_420e_43f5_9797_eba8d83627c5.slice/crio-16592363bd6f7fc1c2366293ebac8dbe64acae770715c9d8d1da26f92c17696e WatchSource:0}: Error finding container 16592363bd6f7fc1c2366293ebac8dbe64acae770715c9d8d1da26f92c17696e: Status 404 returned error can't find the container with id 
16592363bd6f7fc1c2366293ebac8dbe64acae770715c9d8d1da26f92c17696e Mar 08 00:45:13 crc kubenswrapper[4762]: W0308 00:45:13.747153 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f60271a_1333_4e8b_9a9d_1be9697bbfb0.slice/crio-63bd507d4fda3c3fed266200e33fad47ec768775239c24704a233340e4dfa857 WatchSource:0}: Error finding container 63bd507d4fda3c3fed266200e33fad47ec768775239c24704a233340e4dfa857: Status 404 returned error can't find the container with id 63bd507d4fda3c3fed266200e33fad47ec768775239c24704a233340e4dfa857 Mar 08 00:45:13 crc kubenswrapper[4762]: I0308 00:45:13.752016 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.106708 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" event={"ID":"15be8fbe-420e-43f5-9797-eba8d83627c5","Type":"ContainerStarted","Data":"682d0c9adc98928d259e839f9442c06d4a66d728497ef16a218fc2ba526b2dc7"} Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.106795 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" event={"ID":"15be8fbe-420e-43f5-9797-eba8d83627c5","Type":"ContainerStarted","Data":"16592363bd6f7fc1c2366293ebac8dbe64acae770715c9d8d1da26f92c17696e"} Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.109482 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-p9m92" event={"ID":"2613b509-c9d0-4a4b-99c0-11c8c9a0e891","Type":"ContainerStarted","Data":"24f10e01b76751b32e70cbf746c17f0d5d1367933b11187b517e9aae78b67857"} Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.114095 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-w6rs6" 
event={"ID":"6898c30b-2e0c-4062-b5f2-70aa22bb5139","Type":"ContainerStarted","Data":"921de3fc2026836fc040da75a115f70f29584eeb9e193516fd272a984941f492"} Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.118589 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hjftj" event={"ID":"1f60271a-1333-4e8b-9a9d-1be9697bbfb0","Type":"ContainerStarted","Data":"0e53e667046654eb7478c11e56e61886be357d817704fbeff86e04204e225e55"} Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.118743 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hjftj" event={"ID":"1f60271a-1333-4e8b-9a9d-1be9697bbfb0","Type":"ContainerStarted","Data":"63bd507d4fda3c3fed266200e33fad47ec768775239c24704a233340e4dfa857"} Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.123431 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3","Type":"ContainerStarted","Data":"4dc17e7a09601f3644e9a65377c0d22057e1636e69432383d68d272ca098a214"} Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.123467 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-p8wzm" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.143396 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" podStartSLOduration=14.143380753 podStartE2EDuration="14.143380753s" podCreationTimestamp="2026-03-08 00:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:45:14.137099812 +0000 UTC m=+1335.611244156" watchObservedRunningTime="2026-03-08 00:45:14.143380753 +0000 UTC m=+1335.617525097" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.161284 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-w6rs6" podStartSLOduration=3.694491044 podStartE2EDuration="31.161266967s" podCreationTimestamp="2026-03-08 00:44:43 +0000 UTC" firstStartedPulling="2026-03-08 00:44:45.523186707 +0000 UTC m=+1306.997331051" lastFinishedPulling="2026-03-08 00:45:12.98996263 +0000 UTC m=+1334.464106974" observedRunningTime="2026-03-08 00:45:14.156568554 +0000 UTC m=+1335.630712898" watchObservedRunningTime="2026-03-08 00:45:14.161266967 +0000 UTC m=+1335.635411311" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.179710 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-p9m92" podStartSLOduration=3.737648628 podStartE2EDuration="31.179692898s" podCreationTimestamp="2026-03-08 00:44:43 +0000 UTC" firstStartedPulling="2026-03-08 00:44:45.549267791 +0000 UTC m=+1307.023412135" lastFinishedPulling="2026-03-08 00:45:12.991312031 +0000 UTC m=+1334.465456405" observedRunningTime="2026-03-08 00:45:14.172031185 +0000 UTC m=+1335.646175529" watchObservedRunningTime="2026-03-08 00:45:14.179692898 +0000 UTC m=+1335.653837252" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.222590 4762 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-hjftj" podStartSLOduration=18.222570572 podStartE2EDuration="18.222570572s" podCreationTimestamp="2026-03-08 00:44:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:45:14.191036733 +0000 UTC m=+1335.665181077" watchObservedRunningTime="2026-03-08 00:45:14.222570572 +0000 UTC m=+1335.696714916" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.354944 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-8mhx4"] Mar 08 00:45:14 crc kubenswrapper[4762]: E0308 00:45:14.355300 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997" containerName="neutron-db-sync" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.355316 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997" containerName="neutron-db-sync" Mar 08 00:45:14 crc kubenswrapper[4762]: E0308 00:45:14.355345 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8bdde6-0986-49af-98a6-9879bd12953c" containerName="init" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.355352 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8bdde6-0986-49af-98a6-9879bd12953c" containerName="init" Mar 08 00:45:14 crc kubenswrapper[4762]: E0308 00:45:14.355366 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8bdde6-0986-49af-98a6-9879bd12953c" containerName="dnsmasq-dns" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.355374 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8bdde6-0986-49af-98a6-9879bd12953c" containerName="dnsmasq-dns" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.355552 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8bdde6-0986-49af-98a6-9879bd12953c" 
containerName="dnsmasq-dns" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.355570 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997" containerName="neutron-db-sync" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.356539 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.380108 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-8mhx4"] Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.453580 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.459492 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-dns-svc\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.459620 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr8hm\" (UniqueName: \"kubernetes.io/projected/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-kube-api-access-xr8hm\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.490227 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.490329 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-config\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.490398 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.506607 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5575787c44-s8z4s"] Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.533804 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.537366 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.537389 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.537810 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pp5qm" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.538042 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.550005 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5575787c44-s8z4s"] Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.594771 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-dns-svc\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.594833 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xr8hm\" (UniqueName: \"kubernetes.io/projected/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-kube-api-access-xr8hm\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.594852 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: 
\"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.594877 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-config\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.594896 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.594970 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.595874 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.596377 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-dns-svc\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 
00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.597185 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.597671 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-config\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.603508 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.640588 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xr8hm\" (UniqueName: \"kubernetes.io/projected/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-kube-api-access-xr8hm\") pod \"dnsmasq-dns-6b7b667979-8mhx4\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.673196 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.710836 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wr9v\" (UniqueName: \"kubernetes.io/projected/3876d14f-7657-46c3-90dd-145ba8955ccb-kube-api-access-4wr9v\") pod \"neutron-5575787c44-s8z4s\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.711190 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-httpd-config\") pod \"neutron-5575787c44-s8z4s\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.711235 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-ovndb-tls-certs\") pod \"neutron-5575787c44-s8z4s\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.711271 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-config\") pod \"neutron-5575787c44-s8z4s\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.711315 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-combined-ca-bundle\") pod \"neutron-5575787c44-s8z4s\" (UID: 
\"3876d14f-7657-46c3-90dd-145ba8955ccb\") " pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.812619 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-config\") pod \"neutron-5575787c44-s8z4s\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.812693 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-combined-ca-bundle\") pod \"neutron-5575787c44-s8z4s\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.812744 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wr9v\" (UniqueName: \"kubernetes.io/projected/3876d14f-7657-46c3-90dd-145ba8955ccb-kube-api-access-4wr9v\") pod \"neutron-5575787c44-s8z4s\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.812805 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-httpd-config\") pod \"neutron-5575787c44-s8z4s\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.812874 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-ovndb-tls-certs\") pod \"neutron-5575787c44-s8z4s\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:14 
crc kubenswrapper[4762]: I0308 00:45:14.821593 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-ovndb-tls-certs\") pod \"neutron-5575787c44-s8z4s\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.823334 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-combined-ca-bundle\") pod \"neutron-5575787c44-s8z4s\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.823733 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-config\") pod \"neutron-5575787c44-s8z4s\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.845524 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-httpd-config\") pod \"neutron-5575787c44-s8z4s\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.856424 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wr9v\" (UniqueName: \"kubernetes.io/projected/3876d14f-7657-46c3-90dd-145ba8955ccb-kube-api-access-4wr9v\") pod \"neutron-5575787c44-s8z4s\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:14 crc kubenswrapper[4762]: I0308 00:45:14.863039 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:15 crc kubenswrapper[4762]: I0308 00:45:15.143870 4762 generic.go:334] "Generic (PLEG): container finished" podID="15be8fbe-420e-43f5-9797-eba8d83627c5" containerID="682d0c9adc98928d259e839f9442c06d4a66d728497ef16a218fc2ba526b2dc7" exitCode=0 Mar 08 00:45:15 crc kubenswrapper[4762]: I0308 00:45:15.144143 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" event={"ID":"15be8fbe-420e-43f5-9797-eba8d83627c5","Type":"ContainerDied","Data":"682d0c9adc98928d259e839f9442c06d4a66d728497ef16a218fc2ba526b2dc7"} Mar 08 00:45:15 crc kubenswrapper[4762]: I0308 00:45:15.377569 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-8mhx4"] Mar 08 00:45:15 crc kubenswrapper[4762]: I0308 00:45:15.470126 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5575787c44-s8z4s"] Mar 08 00:45:15 crc kubenswrapper[4762]: W0308 00:45:15.474639 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3876d14f_7657_46c3_90dd_145ba8955ccb.slice/crio-951136d34a90a041aa5c01b7f04db73f8c7dea7f0073dfbf58688e19240d1a64 WatchSource:0}: Error finding container 951136d34a90a041aa5c01b7f04db73f8c7dea7f0073dfbf58688e19240d1a64: Status 404 returned error can't find the container with id 951136d34a90a041aa5c01b7f04db73f8c7dea7f0073dfbf58688e19240d1a64 Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.157857 4762 generic.go:334] "Generic (PLEG): container finished" podID="fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25" containerID="f8794e05950f02f95d9dd03f609dcf8ba247db4620877716f662859a67bf586b" exitCode=0 Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.158036 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" 
event={"ID":"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25","Type":"ContainerDied","Data":"f8794e05950f02f95d9dd03f609dcf8ba247db4620877716f662859a67bf586b"} Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.158607 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" event={"ID":"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25","Type":"ContainerStarted","Data":"1bd71a2e6812a40e4d0f06c2f7103804d8cbf9b7568e0031647347fabb4ad7f6"} Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.166726 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5575787c44-s8z4s" event={"ID":"3876d14f-7657-46c3-90dd-145ba8955ccb","Type":"ContainerStarted","Data":"59bd1eb6c0c53e592c24fa18ef3b1c14a1534e8912e049bc8ae01868f52e7d52"} Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.166772 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.167008 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5575787c44-s8z4s" event={"ID":"3876d14f-7657-46c3-90dd-145ba8955ccb","Type":"ContainerStarted","Data":"751653adc35122cf51f8e830b2bfeedeed4d1b11a6f9bc4b8554427075f4ffa0"} Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.167028 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5575787c44-s8z4s" event={"ID":"3876d14f-7657-46c3-90dd-145ba8955ccb","Type":"ContainerStarted","Data":"951136d34a90a041aa5c01b7f04db73f8c7dea7f0073dfbf58688e19240d1a64"} Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.212321 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5575787c44-s8z4s" podStartSLOduration=2.212307033 podStartE2EDuration="2.212307033s" podCreationTimestamp="2026-03-08 00:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-08 00:45:16.208633041 +0000 UTC m=+1337.682777385" watchObservedRunningTime="2026-03-08 00:45:16.212307033 +0000 UTC m=+1337.686451377" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.620516 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.746228 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15be8fbe-420e-43f5-9797-eba8d83627c5-config-volume\") pod \"15be8fbe-420e-43f5-9797-eba8d83627c5\" (UID: \"15be8fbe-420e-43f5-9797-eba8d83627c5\") " Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.746433 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15be8fbe-420e-43f5-9797-eba8d83627c5-secret-volume\") pod \"15be8fbe-420e-43f5-9797-eba8d83627c5\" (UID: \"15be8fbe-420e-43f5-9797-eba8d83627c5\") " Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.746559 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwhkb\" (UniqueName: \"kubernetes.io/projected/15be8fbe-420e-43f5-9797-eba8d83627c5-kube-api-access-cwhkb\") pod \"15be8fbe-420e-43f5-9797-eba8d83627c5\" (UID: \"15be8fbe-420e-43f5-9797-eba8d83627c5\") " Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.748158 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15be8fbe-420e-43f5-9797-eba8d83627c5-config-volume" (OuterVolumeSpecName: "config-volume") pod "15be8fbe-420e-43f5-9797-eba8d83627c5" (UID: "15be8fbe-420e-43f5-9797-eba8d83627c5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.752930 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15be8fbe-420e-43f5-9797-eba8d83627c5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "15be8fbe-420e-43f5-9797-eba8d83627c5" (UID: "15be8fbe-420e-43f5-9797-eba8d83627c5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.754457 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15be8fbe-420e-43f5-9797-eba8d83627c5-kube-api-access-cwhkb" (OuterVolumeSpecName: "kube-api-access-cwhkb") pod "15be8fbe-420e-43f5-9797-eba8d83627c5" (UID: "15be8fbe-420e-43f5-9797-eba8d83627c5"). InnerVolumeSpecName "kube-api-access-cwhkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.793452 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-cc6ddb745-wdmcn"] Mar 08 00:45:16 crc kubenswrapper[4762]: E0308 00:45:16.793846 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15be8fbe-420e-43f5-9797-eba8d83627c5" containerName="collect-profiles" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.793862 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="15be8fbe-420e-43f5-9797-eba8d83627c5" containerName="collect-profiles" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.794044 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="15be8fbe-420e-43f5-9797-eba8d83627c5" containerName="collect-profiles" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.795147 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.797616 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.797815 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.821186 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cc6ddb745-wdmcn"] Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.848492 4762 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/15be8fbe-420e-43f5-9797-eba8d83627c5-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.848527 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwhkb\" (UniqueName: \"kubernetes.io/projected/15be8fbe-420e-43f5-9797-eba8d83627c5-kube-api-access-cwhkb\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.848536 4762 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15be8fbe-420e-43f5-9797-eba8d83627c5-config-volume\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.950328 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-internal-tls-certs\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.950375 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-config\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.950447 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-combined-ca-bundle\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.950491 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nch6q\" (UniqueName: \"kubernetes.io/projected/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-kube-api-access-nch6q\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.950543 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-public-tls-certs\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.950573 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-ovndb-tls-certs\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:16 crc kubenswrapper[4762]: I0308 00:45:16.950598 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-httpd-config\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.052511 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-httpd-config\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.052553 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-internal-tls-certs\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.052576 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-config\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.053242 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-combined-ca-bundle\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.053296 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nch6q\" (UniqueName: \"kubernetes.io/projected/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-kube-api-access-nch6q\") pod 
\"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.053352 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-public-tls-certs\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.053382 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-ovndb-tls-certs\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.057226 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-ovndb-tls-certs\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.058533 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-internal-tls-certs\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.059806 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-config\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:17 crc 
kubenswrapper[4762]: I0308 00:45:17.061833 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-combined-ca-bundle\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.062299 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-httpd-config\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.074219 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nch6q\" (UniqueName: \"kubernetes.io/projected/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-kube-api-access-nch6q\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.074739 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-public-tls-certs\") pod \"neutron-cc6ddb745-wdmcn\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.121291 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.185435 4762 generic.go:334] "Generic (PLEG): container finished" podID="2613b509-c9d0-4a4b-99c0-11c8c9a0e891" containerID="24f10e01b76751b32e70cbf746c17f0d5d1367933b11187b517e9aae78b67857" exitCode=0 Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.185540 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-p9m92" event={"ID":"2613b509-c9d0-4a4b-99c0-11c8c9a0e891","Type":"ContainerDied","Data":"24f10e01b76751b32e70cbf746c17f0d5d1367933b11187b517e9aae78b67857"} Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.187730 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" event={"ID":"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25","Type":"ContainerStarted","Data":"9e304366524fa74fe4e341422f1bf2acd5bac979724f5739c8db525cf6134ffa"} Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.187817 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.189468 4762 generic.go:334] "Generic (PLEG): container finished" podID="6898c30b-2e0c-4062-b5f2-70aa22bb5139" containerID="921de3fc2026836fc040da75a115f70f29584eeb9e193516fd272a984941f492" exitCode=0 Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.189509 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-w6rs6" event={"ID":"6898c30b-2e0c-4062-b5f2-70aa22bb5139","Type":"ContainerDied","Data":"921de3fc2026836fc040da75a115f70f29584eeb9e193516fd272a984941f492"} Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.191579 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.198063 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k" event={"ID":"15be8fbe-420e-43f5-9797-eba8d83627c5","Type":"ContainerDied","Data":"16592363bd6f7fc1c2366293ebac8dbe64acae770715c9d8d1da26f92c17696e"} Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.198279 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16592363bd6f7fc1c2366293ebac8dbe64acae770715c9d8d1da26f92c17696e" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.222532 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" podStartSLOduration=3.222515 podStartE2EDuration="3.222515s" podCreationTimestamp="2026-03-08 00:45:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:45:17.215873618 +0000 UTC m=+1338.690017962" watchObservedRunningTime="2026-03-08 00:45:17.222515 +0000 UTC m=+1338.696659344" Mar 08 00:45:17 crc kubenswrapper[4762]: I0308 00:45:17.728936 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-cc6ddb745-wdmcn"] Mar 08 00:45:18 crc kubenswrapper[4762]: I0308 00:45:18.207250 4762 generic.go:334] "Generic (PLEG): container finished" podID="1f60271a-1333-4e8b-9a9d-1be9697bbfb0" containerID="0e53e667046654eb7478c11e56e61886be357d817704fbeff86e04204e225e55" exitCode=0 Mar 08 00:45:18 crc kubenswrapper[4762]: I0308 00:45:18.207472 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hjftj" event={"ID":"1f60271a-1333-4e8b-9a9d-1be9697bbfb0","Type":"ContainerDied","Data":"0e53e667046654eb7478c11e56e61886be357d817704fbeff86e04204e225e55"} Mar 08 00:45:20 crc kubenswrapper[4762]: W0308 
00:45:20.147398 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ab4ef1a_343c_4aeb_8f34_d4a46ae25827.slice/crio-5f0d99b47935a5d55353ef94e87510c5796fd38fe6d0650502a7e2ee7661bbc8 WatchSource:0}: Error finding container 5f0d99b47935a5d55353ef94e87510c5796fd38fe6d0650502a7e2ee7661bbc8: Status 404 returned error can't find the container with id 5f0d99b47935a5d55353ef94e87510c5796fd38fe6d0650502a7e2ee7661bbc8 Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.237190 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc6ddb745-wdmcn" event={"ID":"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827","Type":"ContainerStarted","Data":"5f0d99b47935a5d55353ef94e87510c5796fd38fe6d0650502a7e2ee7661bbc8"} Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.243116 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-w6rs6" event={"ID":"6898c30b-2e0c-4062-b5f2-70aa22bb5139","Type":"ContainerDied","Data":"dd703df2e994e5bcf3195b4e129b52683ff187ed801c5f19cb734356d8cfca96"} Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.243170 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd703df2e994e5bcf3195b4e129b52683ff187ed801c5f19cb734356d8cfca96" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.245341 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hjftj" event={"ID":"1f60271a-1333-4e8b-9a9d-1be9697bbfb0","Type":"ContainerDied","Data":"63bd507d4fda3c3fed266200e33fad47ec768775239c24704a233340e4dfa857"} Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.245376 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63bd507d4fda3c3fed266200e33fad47ec768775239c24704a233340e4dfa857" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.252298 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-sync-p9m92" event={"ID":"2613b509-c9d0-4a4b-99c0-11c8c9a0e891","Type":"ContainerDied","Data":"658d38d95aab83d5e765437440378913db6f2331759ed49b4cf37797b42210ce"} Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.252350 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="658d38d95aab83d5e765437440378913db6f2331759ed49b4cf37797b42210ce" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.438370 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-p9m92" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.460797 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-w6rs6" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.502379 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.523664 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-scripts\") pod \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.523973 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2x2t\" (UniqueName: \"kubernetes.io/projected/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-kube-api-access-z2x2t\") pod \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.524015 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-config-data\") pod \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\" (UID: 
\"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.524110 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-combined-ca-bundle\") pod \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.529537 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-scripts" (OuterVolumeSpecName: "scripts") pod "2613b509-c9d0-4a4b-99c0-11c8c9a0e891" (UID: "2613b509-c9d0-4a4b-99c0-11c8c9a0e891"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.532813 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-kube-api-access-z2x2t" (OuterVolumeSpecName: "kube-api-access-z2x2t") pod "2613b509-c9d0-4a4b-99c0-11c8c9a0e891" (UID: "2613b509-c9d0-4a4b-99c0-11c8c9a0e891"). InnerVolumeSpecName "kube-api-access-z2x2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.558013 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-config-data" (OuterVolumeSpecName: "config-data") pod "2613b509-c9d0-4a4b-99c0-11c8c9a0e891" (UID: "2613b509-c9d0-4a4b-99c0-11c8c9a0e891"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.581817 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2613b509-c9d0-4a4b-99c0-11c8c9a0e891" (UID: "2613b509-c9d0-4a4b-99c0-11c8c9a0e891"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.626375 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-combined-ca-bundle\") pod \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.626474 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqcrq\" (UniqueName: \"kubernetes.io/projected/6898c30b-2e0c-4062-b5f2-70aa22bb5139-kube-api-access-fqcrq\") pod \"6898c30b-2e0c-4062-b5f2-70aa22bb5139\" (UID: \"6898c30b-2e0c-4062-b5f2-70aa22bb5139\") " Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.626563 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6898c30b-2e0c-4062-b5f2-70aa22bb5139-combined-ca-bundle\") pod \"6898c30b-2e0c-4062-b5f2-70aa22bb5139\" (UID: \"6898c30b-2e0c-4062-b5f2-70aa22bb5139\") " Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.626604 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-fernet-keys\") pod \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.626630 4762 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-config-data\") pod \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.626705 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-scripts\") pod \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.626776 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q9bh\" (UniqueName: \"kubernetes.io/projected/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-kube-api-access-7q9bh\") pod \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.626800 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-credential-keys\") pod \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\" (UID: \"1f60271a-1333-4e8b-9a9d-1be9697bbfb0\") " Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.626874 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-logs\") pod \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\" (UID: \"2613b509-c9d0-4a4b-99c0-11c8c9a0e891\") " Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.626929 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6898c30b-2e0c-4062-b5f2-70aa22bb5139-db-sync-config-data\") pod \"6898c30b-2e0c-4062-b5f2-70aa22bb5139\" (UID: 
\"6898c30b-2e0c-4062-b5f2-70aa22bb5139\") " Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.627285 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.627300 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2x2t\" (UniqueName: \"kubernetes.io/projected/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-kube-api-access-z2x2t\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.627310 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.627318 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.627977 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-logs" (OuterVolumeSpecName: "logs") pod "2613b509-c9d0-4a4b-99c0-11c8c9a0e891" (UID: "2613b509-c9d0-4a4b-99c0-11c8c9a0e891"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.629961 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1f60271a-1333-4e8b-9a9d-1be9697bbfb0" (UID: "1f60271a-1333-4e8b-9a9d-1be9697bbfb0"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.630947 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1f60271a-1333-4e8b-9a9d-1be9697bbfb0" (UID: "1f60271a-1333-4e8b-9a9d-1be9697bbfb0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.631126 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6898c30b-2e0c-4062-b5f2-70aa22bb5139-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6898c30b-2e0c-4062-b5f2-70aa22bb5139" (UID: "6898c30b-2e0c-4062-b5f2-70aa22bb5139"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.631448 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6898c30b-2e0c-4062-b5f2-70aa22bb5139-kube-api-access-fqcrq" (OuterVolumeSpecName: "kube-api-access-fqcrq") pod "6898c30b-2e0c-4062-b5f2-70aa22bb5139" (UID: "6898c30b-2e0c-4062-b5f2-70aa22bb5139"). InnerVolumeSpecName "kube-api-access-fqcrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.634447 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-scripts" (OuterVolumeSpecName: "scripts") pod "1f60271a-1333-4e8b-9a9d-1be9697bbfb0" (UID: "1f60271a-1333-4e8b-9a9d-1be9697bbfb0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.636865 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-kube-api-access-7q9bh" (OuterVolumeSpecName: "kube-api-access-7q9bh") pod "1f60271a-1333-4e8b-9a9d-1be9697bbfb0" (UID: "1f60271a-1333-4e8b-9a9d-1be9697bbfb0"). InnerVolumeSpecName "kube-api-access-7q9bh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.653886 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f60271a-1333-4e8b-9a9d-1be9697bbfb0" (UID: "1f60271a-1333-4e8b-9a9d-1be9697bbfb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.654704 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-config-data" (OuterVolumeSpecName: "config-data") pod "1f60271a-1333-4e8b-9a9d-1be9697bbfb0" (UID: "1f60271a-1333-4e8b-9a9d-1be9697bbfb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.664188 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6898c30b-2e0c-4062-b5f2-70aa22bb5139-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6898c30b-2e0c-4062-b5f2-70aa22bb5139" (UID: "6898c30b-2e0c-4062-b5f2-70aa22bb5139"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.729098 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6898c30b-2e0c-4062-b5f2-70aa22bb5139-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.729139 4762 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.729150 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.729158 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.729168 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q9bh\" (UniqueName: \"kubernetes.io/projected/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-kube-api-access-7q9bh\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.729177 4762 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.729185 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2613b509-c9d0-4a4b-99c0-11c8c9a0e891-logs\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.729193 4762 reconciler_common.go:293] "Volume 
detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6898c30b-2e0c-4062-b5f2-70aa22bb5139-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.729217 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f60271a-1333-4e8b-9a9d-1be9697bbfb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:20 crc kubenswrapper[4762]: I0308 00:45:20.729225 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqcrq\" (UniqueName: \"kubernetes.io/projected/6898c30b-2e0c-4062-b5f2-70aa22bb5139-kube-api-access-fqcrq\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.268681 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-p9m92" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.269572 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-w6rs6" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.270162 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hjftj" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.287000 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.287035 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc6ddb745-wdmcn" event={"ID":"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827","Type":"ContainerStarted","Data":"1f0ecddfe95f16a396f116e15350bd909a31cf5009d8363c7dd10f9b4f72243b"} Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.287074 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc6ddb745-wdmcn" event={"ID":"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827","Type":"ContainerStarted","Data":"ff8363a67a1951b3052fbc19ff59c6e68173a5d969bcf4177fdd189d4bf4fd74"} Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.287086 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3","Type":"ContainerStarted","Data":"2db18f9713756625f75a5a7df0515ce7f74e0e08b3d01c5f438c3d81945f68e1"} Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.296246 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-cc6ddb745-wdmcn" podStartSLOduration=5.296232769 podStartE2EDuration="5.296232769s" podCreationTimestamp="2026-03-08 00:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:45:21.293352951 +0000 UTC m=+1342.767497325" watchObservedRunningTime="2026-03-08 00:45:21.296232769 +0000 UTC m=+1342.770377113" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.596118 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6d6c959c44-lwsnn"] Mar 08 00:45:21 crc kubenswrapper[4762]: E0308 00:45:21.596636 4762 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="1f60271a-1333-4e8b-9a9d-1be9697bbfb0" containerName="keystone-bootstrap" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.596668 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f60271a-1333-4e8b-9a9d-1be9697bbfb0" containerName="keystone-bootstrap" Mar 08 00:45:21 crc kubenswrapper[4762]: E0308 00:45:21.596688 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6898c30b-2e0c-4062-b5f2-70aa22bb5139" containerName="barbican-db-sync" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.596696 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="6898c30b-2e0c-4062-b5f2-70aa22bb5139" containerName="barbican-db-sync" Mar 08 00:45:21 crc kubenswrapper[4762]: E0308 00:45:21.596740 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2613b509-c9d0-4a4b-99c0-11c8c9a0e891" containerName="placement-db-sync" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.596870 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2613b509-c9d0-4a4b-99c0-11c8c9a0e891" containerName="placement-db-sync" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.597127 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="2613b509-c9d0-4a4b-99c0-11c8c9a0e891" containerName="placement-db-sync" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.597151 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f60271a-1333-4e8b-9a9d-1be9697bbfb0" containerName="keystone-bootstrap" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.597170 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="6898c30b-2e0c-4062-b5f2-70aa22bb5139" containerName="barbican-db-sync" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.598434 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.605725 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.605732 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.606023 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-2lct7" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.606317 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.606419 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.644443 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6d6c959c44-lwsnn"] Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.725937 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b9c87cdf8-vw485"] Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.736944 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.739264 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b9c87cdf8-vw485"] Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.742898 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.742931 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-rwwq6" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.743197 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.743312 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.743403 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.743417 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.747457 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-scripts\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.747501 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-logs\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 
08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.747534 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-public-tls-certs\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.747564 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-combined-ca-bundle\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.747594 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-config-data\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.747618 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs6t5\" (UniqueName: \"kubernetes.io/projected/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-kube-api-access-bs6t5\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.747672 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-internal-tls-certs\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " 
pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.804114 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-78ddcb99df-zh6nd"] Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.805578 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.812276 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-l8dlj" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.812385 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.812468 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.829850 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-78ddcb99df-zh6nd"] Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.849161 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-fernet-keys\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.849193 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-internal-tls-certs\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.849225 4762 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-scripts\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.849251 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-logs\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.849282 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-public-tls-certs\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.849311 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-scripts\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.849324 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-config-data\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.849338 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-combined-ca-bundle\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.849365 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-config-data\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.849386 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs6t5\" (UniqueName: \"kubernetes.io/projected/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-kube-api-access-bs6t5\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.849431 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw9sn\" (UniqueName: \"kubernetes.io/projected/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-kube-api-access-nw9sn\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.849457 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-internal-tls-certs\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.849491 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-combined-ca-bundle\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.849512 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-public-tls-certs\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.849527 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-credential-keys\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.851686 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-logs\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.853522 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5f7494d5db-9bxk7"] Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.856390 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.859858 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.878908 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-scripts\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.882340 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-config-data\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.882682 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs6t5\" (UniqueName: \"kubernetes.io/projected/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-kube-api-access-bs6t5\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.883202 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-internal-tls-certs\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.887028 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-public-tls-certs\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.897065 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f7494d5db-9bxk7"] Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.898329 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-combined-ca-bundle\") pod \"placement-6d6c959c44-lwsnn\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") " pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.954991 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5d9b443-c848-4a33-a659-a241b3b19cbf-logs\") pod \"barbican-worker-78ddcb99df-zh6nd\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.955049 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-config-data-custom\") pod \"barbican-worker-78ddcb99df-zh6nd\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.955090 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-combined-ca-bundle\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc 
kubenswrapper[4762]: I0308 00:45:21.955109 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-public-tls-certs\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.955130 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-credential-keys\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.955154 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-combined-ca-bundle\") pod \"barbican-keystone-listener-5f7494d5db-9bxk7\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.955180 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf8ba27-7964-4650-9316-aabba252ed71-logs\") pod \"barbican-keystone-listener-5f7494d5db-9bxk7\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.955214 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-fernet-keys\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc 
kubenswrapper[4762]: I0308 00:45:21.955233 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-internal-tls-certs\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.955257 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-config-data-custom\") pod \"barbican-keystone-listener-5f7494d5db-9bxk7\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.955282 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-combined-ca-bundle\") pod \"barbican-worker-78ddcb99df-zh6nd\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.955326 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-config-data\") pod \"barbican-keystone-listener-5f7494d5db-9bxk7\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.955349 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-scripts\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " 
pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.955367 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-config-data\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.955404 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmh9l\" (UniqueName: \"kubernetes.io/projected/faf8ba27-7964-4650-9316-aabba252ed71-kube-api-access-tmh9l\") pod \"barbican-keystone-listener-5f7494d5db-9bxk7\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.955424 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-config-data\") pod \"barbican-worker-78ddcb99df-zh6nd\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.955454 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swl8j\" (UniqueName: \"kubernetes.io/projected/e5d9b443-c848-4a33-a659-a241b3b19cbf-kube-api-access-swl8j\") pod \"barbican-worker-78ddcb99df-zh6nd\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.955480 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw9sn\" (UniqueName: \"kubernetes.io/projected/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-kube-api-access-nw9sn\") pod 
\"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.958638 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.958779 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-8mhx4"] Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.959025 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" podUID="fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25" containerName="dnsmasq-dns" containerID="cri-o://9e304366524fa74fe4e341422f1bf2acd5bac979724f5739c8db525cf6134ffa" gracePeriod=10 Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.964920 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.977963 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-config-data\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.987287 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-public-tls-certs\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:21 crc kubenswrapper[4762]: I0308 00:45:21.988336 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-credential-keys\") pod 
\"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.006932 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-internal-tls-certs\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.019116 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-combined-ca-bundle\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.022395 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw9sn\" (UniqueName: \"kubernetes.io/projected/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-kube-api-access-nw9sn\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.038705 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-scripts\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.044328 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4-fernet-keys\") pod \"keystone-b9c87cdf8-vw485\" (UID: \"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4\") " pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:22 
crc kubenswrapper[4762]: I0308 00:45:22.064858 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-combined-ca-bundle\") pod \"barbican-keystone-listener-5f7494d5db-9bxk7\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.064943 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf8ba27-7964-4650-9316-aabba252ed71-logs\") pod \"barbican-keystone-listener-5f7494d5db-9bxk7\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.065022 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-config-data-custom\") pod \"barbican-keystone-listener-5f7494d5db-9bxk7\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.065058 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-combined-ca-bundle\") pod \"barbican-worker-78ddcb99df-zh6nd\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.065143 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-config-data\") pod \"barbican-keystone-listener-5f7494d5db-9bxk7\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " 
pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.065218 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmh9l\" (UniqueName: \"kubernetes.io/projected/faf8ba27-7964-4650-9316-aabba252ed71-kube-api-access-tmh9l\") pod \"barbican-keystone-listener-5f7494d5db-9bxk7\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.065240 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-config-data\") pod \"barbican-worker-78ddcb99df-zh6nd\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.065562 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swl8j\" (UniqueName: \"kubernetes.io/projected/e5d9b443-c848-4a33-a659-a241b3b19cbf-kube-api-access-swl8j\") pod \"barbican-worker-78ddcb99df-zh6nd\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.065703 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5d9b443-c848-4a33-a659-a241b3b19cbf-logs\") pod \"barbican-worker-78ddcb99df-zh6nd\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.065746 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-config-data-custom\") pod \"barbican-worker-78ddcb99df-zh6nd\" (UID: 
\"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.066476 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5d9b443-c848-4a33-a659-a241b3b19cbf-logs\") pod \"barbican-worker-78ddcb99df-zh6nd\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.071721 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-config-data-custom\") pod \"barbican-worker-78ddcb99df-zh6nd\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.071965 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-config-data\") pod \"barbican-worker-78ddcb99df-zh6nd\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.075168 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf8ba27-7964-4650-9316-aabba252ed71-logs\") pod \"barbican-keystone-listener-5f7494d5db-9bxk7\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.075468 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-combined-ca-bundle\") pod \"barbican-worker-78ddcb99df-zh6nd\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " 
pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.075616 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-config-data-custom\") pod \"barbican-keystone-listener-5f7494d5db-9bxk7\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.076236 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.105581 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-combined-ca-bundle\") pod \"barbican-keystone-listener-5f7494d5db-9bxk7\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.109858 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmh9l\" (UniqueName: \"kubernetes.io/projected/faf8ba27-7964-4650-9316-aabba252ed71-kube-api-access-tmh9l\") pod \"barbican-keystone-listener-5f7494d5db-9bxk7\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.111490 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-lnczd"] Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.115203 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.118526 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-config-data\") pod \"barbican-keystone-listener-5f7494d5db-9bxk7\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.138616 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swl8j\" (UniqueName: \"kubernetes.io/projected/e5d9b443-c848-4a33-a659-a241b3b19cbf-kube-api-access-swl8j\") pod \"barbican-worker-78ddcb99df-zh6nd\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.156212 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-lnczd"] Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.157188 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.209102 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7b58c875bd-447x6"] Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.212845 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.220291 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.245822 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b58c875bd-447x6"] Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.258577 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6875ccb78-kt8h4"] Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.260920 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.274900 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-config\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.274942 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.274970 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldlrt\" (UniqueName: \"kubernetes.io/projected/845d8906-f82f-418a-88ca-8cd6087fafca-kube-api-access-ldlrt\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" 
Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.275012 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.275048 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-config-data-custom\") pod \"barbican-api-7b58c875bd-447x6\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.275071 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv5nz\" (UniqueName: \"kubernetes.io/projected/bb4ec81f-27c0-46fd-9959-792c057d62f7-kube-api-access-pv5nz\") pod \"barbican-api-7b58c875bd-447x6\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.275095 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.275200 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-config-data\") pod \"barbican-api-7b58c875bd-447x6\" (UID: 
\"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.275227 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4ec81f-27c0-46fd-9959-792c057d62f7-logs\") pod \"barbican-api-7b58c875bd-447x6\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.275325 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-combined-ca-bundle\") pod \"barbican-api-7b58c875bd-447x6\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.275368 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.300914 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-65899b8d79-mjhtt"] Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.307733 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.317216 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6875ccb78-kt8h4"] Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.336691 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-65899b8d79-mjhtt"] Mar 08 00:45:22 crc kubenswrapper[4762]: E0308 00:45:22.337152 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd4c63f7_50c6_4e22_a8f0_1d4f207d3b25.slice/crio-9e304366524fa74fe4e341422f1bf2acd5bac979724f5739c8db525cf6134ffa.scope\": RecentStats: unable to find data in memory cache]" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.348471 4762 generic.go:334] "Generic (PLEG): container finished" podID="fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25" containerID="9e304366524fa74fe4e341422f1bf2acd5bac979724f5739c8db525cf6134ffa" exitCode=0 Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.348857 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" event={"ID":"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25","Type":"ContainerDied","Data":"9e304366524fa74fe4e341422f1bf2acd5bac979724f5739c8db525cf6134ffa"} Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.362262 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377122 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-combined-ca-bundle\") pod \"barbican-api-7b58c875bd-447x6\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377166 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377199 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2c4db5-761b-407b-9e2f-a46ca6bc5675-combined-ca-bundle\") pod \"barbican-keystone-listener-6875ccb78-kt8h4\" (UID: \"9f2c4db5-761b-407b-9e2f-a46ca6bc5675\") " pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377223 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-config\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377247 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: 
\"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377269 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqvlg\" (UniqueName: \"kubernetes.io/projected/955ae1b9-66fe-47c3-934b-a4372a87e21a-kube-api-access-pqvlg\") pod \"barbican-worker-65899b8d79-mjhtt\" (UID: \"955ae1b9-66fe-47c3-934b-a4372a87e21a\") " pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377290 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldlrt\" (UniqueName: \"kubernetes.io/projected/845d8906-f82f-418a-88ca-8cd6087fafca-kube-api-access-ldlrt\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377304 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f2c4db5-761b-407b-9e2f-a46ca6bc5675-config-data\") pod \"barbican-keystone-listener-6875ccb78-kt8h4\" (UID: \"9f2c4db5-761b-407b-9e2f-a46ca6bc5675\") " pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377336 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/955ae1b9-66fe-47c3-934b-a4372a87e21a-logs\") pod \"barbican-worker-65899b8d79-mjhtt\" (UID: \"955ae1b9-66fe-47c3-934b-a4372a87e21a\") " pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377356 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377381 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/955ae1b9-66fe-47c3-934b-a4372a87e21a-combined-ca-bundle\") pod \"barbican-worker-65899b8d79-mjhtt\" (UID: \"955ae1b9-66fe-47c3-934b-a4372a87e21a\") " pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377400 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f2c4db5-761b-407b-9e2f-a46ca6bc5675-config-data-custom\") pod \"barbican-keystone-listener-6875ccb78-kt8h4\" (UID: \"9f2c4db5-761b-407b-9e2f-a46ca6bc5675\") " pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377425 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-config-data-custom\") pod \"barbican-api-7b58c875bd-447x6\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377453 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv5nz\" (UniqueName: \"kubernetes.io/projected/bb4ec81f-27c0-46fd-9959-792c057d62f7-kube-api-access-pv5nz\") pod \"barbican-api-7b58c875bd-447x6\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377470 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br4f6\" (UniqueName: \"kubernetes.io/projected/9f2c4db5-761b-407b-9e2f-a46ca6bc5675-kube-api-access-br4f6\") pod \"barbican-keystone-listener-6875ccb78-kt8h4\" (UID: \"9f2c4db5-761b-407b-9e2f-a46ca6bc5675\") " pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377493 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f2c4db5-761b-407b-9e2f-a46ca6bc5675-logs\") pod \"barbican-keystone-listener-6875ccb78-kt8h4\" (UID: \"9f2c4db5-761b-407b-9e2f-a46ca6bc5675\") " pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377512 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377547 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/955ae1b9-66fe-47c3-934b-a4372a87e21a-config-data\") pod \"barbican-worker-65899b8d79-mjhtt\" (UID: \"955ae1b9-66fe-47c3-934b-a4372a87e21a\") " pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377578 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-config-data\") pod \"barbican-api-7b58c875bd-447x6\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 
00:45:22.377594 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/955ae1b9-66fe-47c3-934b-a4372a87e21a-config-data-custom\") pod \"barbican-worker-65899b8d79-mjhtt\" (UID: \"955ae1b9-66fe-47c3-934b-a4372a87e21a\") " pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.377615 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4ec81f-27c0-46fd-9959-792c057d62f7-logs\") pod \"barbican-api-7b58c875bd-447x6\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.379212 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.380046 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.380609 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-config\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.381557 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.384427 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4ec81f-27c0-46fd-9959-792c057d62f7-logs\") pod \"barbican-api-7b58c875bd-447x6\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.386084 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-combined-ca-bundle\") pod \"barbican-api-7b58c875bd-447x6\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.387021 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.389387 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-config-data\") pod \"barbican-api-7b58c875bd-447x6\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.401995 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldlrt\" (UniqueName: 
\"kubernetes.io/projected/845d8906-f82f-418a-88ca-8cd6087fafca-kube-api-access-ldlrt\") pod \"dnsmasq-dns-848cf88cfc-lnczd\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.403200 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-config-data-custom\") pod \"barbican-api-7b58c875bd-447x6\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.409554 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv5nz\" (UniqueName: \"kubernetes.io/projected/bb4ec81f-27c0-46fd-9959-792c057d62f7-kube-api-access-pv5nz\") pod \"barbican-api-7b58c875bd-447x6\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.421483 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.441290 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5575787c44-s8z4s"] Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.441610 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5575787c44-s8z4s" podUID="3876d14f-7657-46c3-90dd-145ba8955ccb" containerName="neutron-api" containerID="cri-o://751653adc35122cf51f8e830b2bfeedeed4d1b11a6f9bc4b8554427075f4ffa0" gracePeriod=30 Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.442126 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5575787c44-s8z4s" podUID="3876d14f-7657-46c3-90dd-145ba8955ccb" containerName="neutron-httpd" containerID="cri-o://59bd1eb6c0c53e592c24fa18ef3b1c14a1534e8912e049bc8ae01868f52e7d52" gracePeriod=30 Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.459185 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5575787c44-s8z4s" podUID="3876d14f-7657-46c3-90dd-145ba8955ccb" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.184:9696/\": EOF" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.459290 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-fd79d57f6-9kghw"] Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.461299 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.479262 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/955ae1b9-66fe-47c3-934b-a4372a87e21a-config-data\") pod \"barbican-worker-65899b8d79-mjhtt\" (UID: \"955ae1b9-66fe-47c3-934b-a4372a87e21a\") " pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.479348 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/955ae1b9-66fe-47c3-934b-a4372a87e21a-config-data-custom\") pod \"barbican-worker-65899b8d79-mjhtt\" (UID: \"955ae1b9-66fe-47c3-934b-a4372a87e21a\") " pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.479444 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2c4db5-761b-407b-9e2f-a46ca6bc5675-combined-ca-bundle\") pod \"barbican-keystone-listener-6875ccb78-kt8h4\" (UID: \"9f2c4db5-761b-407b-9e2f-a46ca6bc5675\") " pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.479502 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqvlg\" (UniqueName: \"kubernetes.io/projected/955ae1b9-66fe-47c3-934b-a4372a87e21a-kube-api-access-pqvlg\") pod \"barbican-worker-65899b8d79-mjhtt\" (UID: \"955ae1b9-66fe-47c3-934b-a4372a87e21a\") " pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.479532 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f2c4db5-761b-407b-9e2f-a46ca6bc5675-config-data\") pod \"barbican-keystone-listener-6875ccb78-kt8h4\" (UID: 
\"9f2c4db5-761b-407b-9e2f-a46ca6bc5675\") " pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.479582 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/955ae1b9-66fe-47c3-934b-a4372a87e21a-logs\") pod \"barbican-worker-65899b8d79-mjhtt\" (UID: \"955ae1b9-66fe-47c3-934b-a4372a87e21a\") " pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.479633 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/955ae1b9-66fe-47c3-934b-a4372a87e21a-combined-ca-bundle\") pod \"barbican-worker-65899b8d79-mjhtt\" (UID: \"955ae1b9-66fe-47c3-934b-a4372a87e21a\") " pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.479658 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f2c4db5-761b-407b-9e2f-a46ca6bc5675-config-data-custom\") pod \"barbican-keystone-listener-6875ccb78-kt8h4\" (UID: \"9f2c4db5-761b-407b-9e2f-a46ca6bc5675\") " pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.479724 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br4f6\" (UniqueName: \"kubernetes.io/projected/9f2c4db5-761b-407b-9e2f-a46ca6bc5675-kube-api-access-br4f6\") pod \"barbican-keystone-listener-6875ccb78-kt8h4\" (UID: \"9f2c4db5-761b-407b-9e2f-a46ca6bc5675\") " pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.479768 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f2c4db5-761b-407b-9e2f-a46ca6bc5675-logs\") pod 
\"barbican-keystone-listener-6875ccb78-kt8h4\" (UID: \"9f2c4db5-761b-407b-9e2f-a46ca6bc5675\") " pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.482819 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9f2c4db5-761b-407b-9e2f-a46ca6bc5675-logs\") pod \"barbican-keystone-listener-6875ccb78-kt8h4\" (UID: \"9f2c4db5-761b-407b-9e2f-a46ca6bc5675\") " pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.483299 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/955ae1b9-66fe-47c3-934b-a4372a87e21a-logs\") pod \"barbican-worker-65899b8d79-mjhtt\" (UID: \"955ae1b9-66fe-47c3-934b-a4372a87e21a\") " pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.518515 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-fd79d57f6-9kghw"] Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.530391 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-849f745c8c-pjhz2"] Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.535451 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.569008 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-849f745c8c-pjhz2"] Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.581150 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwn9g\" (UniqueName: \"kubernetes.io/projected/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-kube-api-access-kwn9g\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.581193 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-ovndb-tls-certs\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.581255 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-public-tls-certs\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.581287 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-config-data\") pod \"barbican-api-fd79d57f6-9kghw\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.581590 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-config\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.581633 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-combined-ca-bundle\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.581668 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-combined-ca-bundle\") pod \"barbican-api-fd79d57f6-9kghw\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.581799 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgwfp\" (UniqueName: \"kubernetes.io/projected/198d66d2-adcf-4028-9a59-9e396513f44d-kube-api-access-xgwfp\") pod \"barbican-api-fd79d57f6-9kghw\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.581933 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-internal-tls-certs\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.581992 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-httpd-config\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.582045 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-config-data-custom\") pod \"barbican-api-fd79d57f6-9kghw\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.582172 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/198d66d2-adcf-4028-9a59-9e396513f44d-logs\") pod \"barbican-api-fd79d57f6-9kghw\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.583839 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f2c4db5-761b-407b-9e2f-a46ca6bc5675-config-data\") pod \"barbican-keystone-listener-6875ccb78-kt8h4\" (UID: \"9f2c4db5-761b-407b-9e2f-a46ca6bc5675\") " pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.584802 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqvlg\" (UniqueName: \"kubernetes.io/projected/955ae1b9-66fe-47c3-934b-a4372a87e21a-kube-api-access-pqvlg\") pod \"barbican-worker-65899b8d79-mjhtt\" (UID: \"955ae1b9-66fe-47c3-934b-a4372a87e21a\") " pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.585408 4762 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2c4db5-761b-407b-9e2f-a46ca6bc5675-combined-ca-bundle\") pod \"barbican-keystone-listener-6875ccb78-kt8h4\" (UID: \"9f2c4db5-761b-407b-9e2f-a46ca6bc5675\") " pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.589356 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/955ae1b9-66fe-47c3-934b-a4372a87e21a-combined-ca-bundle\") pod \"barbican-worker-65899b8d79-mjhtt\" (UID: \"955ae1b9-66fe-47c3-934b-a4372a87e21a\") " pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.591116 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9f2c4db5-761b-407b-9e2f-a46ca6bc5675-config-data-custom\") pod \"barbican-keystone-listener-6875ccb78-kt8h4\" (UID: \"9f2c4db5-761b-407b-9e2f-a46ca6bc5675\") " pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.592052 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/955ae1b9-66fe-47c3-934b-a4372a87e21a-config-data\") pod \"barbican-worker-65899b8d79-mjhtt\" (UID: \"955ae1b9-66fe-47c3-934b-a4372a87e21a\") " pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.609410 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/955ae1b9-66fe-47c3-934b-a4372a87e21a-config-data-custom\") pod \"barbican-worker-65899b8d79-mjhtt\" (UID: \"955ae1b9-66fe-47c3-934b-a4372a87e21a\") " pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.609863 4762 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-br4f6\" (UniqueName: \"kubernetes.io/projected/9f2c4db5-761b-407b-9e2f-a46ca6bc5675-kube-api-access-br4f6\") pod \"barbican-keystone-listener-6875ccb78-kt8h4\" (UID: \"9f2c4db5-761b-407b-9e2f-a46ca6bc5675\") " pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.616036 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.629663 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-65899b8d79-mjhtt" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.678378 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.687215 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-combined-ca-bundle\") pod \"barbican-api-fd79d57f6-9kghw\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.687272 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgwfp\" (UniqueName: \"kubernetes.io/projected/198d66d2-adcf-4028-9a59-9e396513f44d-kube-api-access-xgwfp\") pod \"barbican-api-fd79d57f6-9kghw\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.687313 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-internal-tls-certs\") pod \"neutron-849f745c8c-pjhz2\" (UID: 
\"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.687349 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-httpd-config\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.687370 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-config-data-custom\") pod \"barbican-api-fd79d57f6-9kghw\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.687425 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/198d66d2-adcf-4028-9a59-9e396513f44d-logs\") pod \"barbican-api-fd79d57f6-9kghw\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.687464 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwn9g\" (UniqueName: \"kubernetes.io/projected/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-kube-api-access-kwn9g\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.687478 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-ovndb-tls-certs\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" 
Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.687526 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-public-tls-certs\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.687554 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-config-data\") pod \"barbican-api-fd79d57f6-9kghw\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.687579 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-config\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.687610 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-combined-ca-bundle\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.760307 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-combined-ca-bundle\") pod \"barbican-api-fd79d57f6-9kghw\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.761737 4762 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xgwfp\" (UniqueName: \"kubernetes.io/projected/198d66d2-adcf-4028-9a59-9e396513f44d-kube-api-access-xgwfp\") pod \"barbican-api-fd79d57f6-9kghw\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.762278 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-config-data-custom\") pod \"barbican-api-fd79d57f6-9kghw\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.762611 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/198d66d2-adcf-4028-9a59-9e396513f44d-logs\") pod \"barbican-api-fd79d57f6-9kghw\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.762920 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-config-data\") pod \"barbican-api-fd79d57f6-9kghw\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.765193 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-httpd-config\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.771017 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-internal-tls-certs\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.771671 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-config\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.772386 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-public-tls-certs\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.773330 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwn9g\" (UniqueName: \"kubernetes.io/projected/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-kube-api-access-kwn9g\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.776940 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-combined-ca-bundle\") pod \"neutron-849f745c8c-pjhz2\" (UID: \"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.776660 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/aa4ce185-2a5f-4da0-89e0-ad8fb82bd170-ovndb-tls-certs\") pod \"neutron-849f745c8c-pjhz2\" (UID: 
\"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170\") " pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.946973 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:22 crc kubenswrapper[4762]: I0308 00:45:22.965029 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.060779 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.105233 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-ovsdbserver-sb\") pod \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.105286 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-config\") pod \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.105403 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-ovsdbserver-nb\") pod \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.105431 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-dns-swift-storage-0\") pod \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\" 
(UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.105501 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xr8hm\" (UniqueName: \"kubernetes.io/projected/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-kube-api-access-xr8hm\") pod \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.105537 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-dns-svc\") pod \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\" (UID: \"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25\") " Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.120622 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-kube-api-access-xr8hm" (OuterVolumeSpecName: "kube-api-access-xr8hm") pod "fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25" (UID: "fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25"). InnerVolumeSpecName "kube-api-access-xr8hm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.125884 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6d6c959c44-lwsnn"] Mar 08 00:45:23 crc kubenswrapper[4762]: W0308 00:45:23.175434 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaefc1b1f_8723_437e_94a6_bc60ccc2f6b6.slice/crio-d768636b3493fec6c7a2554942e51026a41ebd2792bf60df48962385cf616071 WatchSource:0}: Error finding container d768636b3493fec6c7a2554942e51026a41ebd2792bf60df48962385cf616071: Status 404 returned error can't find the container with id d768636b3493fec6c7a2554942e51026a41ebd2792bf60df48962385cf616071 Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.207614 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xr8hm\" (UniqueName: \"kubernetes.io/projected/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-kube-api-access-xr8hm\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.246859 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-config" (OuterVolumeSpecName: "config") pod "fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25" (UID: "fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.286385 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25" (UID: "fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.306332 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25" (UID: "fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.319424 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25" (UID: "fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.328921 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.330516 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.330608 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.330670 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:23 crc 
kubenswrapper[4762]: I0308 00:45:23.385373 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25" (UID: "fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.430862 4762 generic.go:334] "Generic (PLEG): container finished" podID="3876d14f-7657-46c3-90dd-145ba8955ccb" containerID="59bd1eb6c0c53e592c24fa18ef3b1c14a1534e8912e049bc8ae01868f52e7d52" exitCode=0 Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.430932 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5575787c44-s8z4s" event={"ID":"3876d14f-7657-46c3-90dd-145ba8955ccb","Type":"ContainerDied","Data":"59bd1eb6c0c53e592c24fa18ef3b1c14a1534e8912e049bc8ae01868f52e7d52"} Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.432442 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d6c959c44-lwsnn" event={"ID":"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6","Type":"ContainerStarted","Data":"d768636b3493fec6c7a2554942e51026a41ebd2792bf60df48962385cf616071"} Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.433089 4762 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.434565 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" event={"ID":"fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25","Type":"ContainerDied","Data":"1bd71a2e6812a40e4d0f06c2f7103804d8cbf9b7568e0031647347fabb4ad7f6"} Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.434598 4762 scope.go:117] 
"RemoveContainer" containerID="9e304366524fa74fe4e341422f1bf2acd5bac979724f5739c8db525cf6134ffa" Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.434770 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-8mhx4" Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.514640 4762 scope.go:117] "RemoveContainer" containerID="f8794e05950f02f95d9dd03f609dcf8ba247db4620877716f662859a67bf586b" Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.514995 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-8mhx4"] Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.545624 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-8mhx4"] Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.551572 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b9c87cdf8-vw485"] Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.559792 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-78ddcb99df-zh6nd"] Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.567179 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f7494d5db-9bxk7"] Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.577925 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b58c875bd-447x6"] Mar 08 00:45:23 crc kubenswrapper[4762]: I0308 00:45:23.965438 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-lnczd"] Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.031802 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6875ccb78-kt8h4"] Mar 08 00:45:24 crc kubenswrapper[4762]: W0308 00:45:24.080121 4762 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod198d66d2_adcf_4028_9a59_9e396513f44d.slice/crio-a8f8a083cf06b482a32d488af2a550ae6e4fd1e322ee088703033342137641eb WatchSource:0}: Error finding container a8f8a083cf06b482a32d488af2a550ae6e4fd1e322ee088703033342137641eb: Status 404 returned error can't find the container with id a8f8a083cf06b482a32d488af2a550ae6e4fd1e322ee088703033342137641eb Mar 08 00:45:24 crc kubenswrapper[4762]: W0308 00:45:24.094323 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod955ae1b9_66fe_47c3_934b_a4372a87e21a.slice/crio-818a65868ed0c79379077f1e1b8578b47e87168c16fb2a6e84b8e043710d4c58 WatchSource:0}: Error finding container 818a65868ed0c79379077f1e1b8578b47e87168c16fb2a6e84b8e043710d4c58: Status 404 returned error can't find the container with id 818a65868ed0c79379077f1e1b8578b47e87168c16fb2a6e84b8e043710d4c58 Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.103961 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-fd79d57f6-9kghw"] Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.120426 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-65899b8d79-mjhtt"] Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.132629 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-849f745c8c-pjhz2"] Mar 08 00:45:24 crc kubenswrapper[4762]: W0308 00:45:24.157960 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa4ce185_2a5f_4da0_89e0_ad8fb82bd170.slice/crio-38a02dbd9c443822a66c8177e87be5bd6174650615d0aba876e76d246e4c294e WatchSource:0}: Error finding container 38a02dbd9c443822a66c8177e87be5bd6174650615d0aba876e76d246e4c294e: Status 404 returned error can't find the container with id 38a02dbd9c443822a66c8177e87be5bd6174650615d0aba876e76d246e4c294e Mar 08 
00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.472432 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" event={"ID":"845d8906-f82f-418a-88ca-8cd6087fafca","Type":"ContainerStarted","Data":"fac738c3f3aac38a006e3c3b7ec656003c682572fa86046817c256085457aa07"} Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.475202 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" event={"ID":"faf8ba27-7964-4650-9316-aabba252ed71","Type":"ContainerStarted","Data":"1706175f817afbaba3460a740caead7930868b89cd81a2c8513f0124364d229f"} Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.487668 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d6c959c44-lwsnn" event={"ID":"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6","Type":"ContainerStarted","Data":"994022efbd7c851568489198582fe3d0ec406eb6bf4eb6f44ab48bae905fbe9e"} Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.487724 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d6c959c44-lwsnn" event={"ID":"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6","Type":"ContainerStarted","Data":"d0a8be3c5e6f6ce1dbca93979bad329fc90663c6ef835c655c3bed3d3e5fdd66"} Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.488831 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.526226 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-849f745c8c-pjhz2" event={"ID":"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170","Type":"ContainerStarted","Data":"38a02dbd9c443822a66c8177e87be5bd6174650615d0aba876e76d246e4c294e"} Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.535003 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" 
event={"ID":"9f2c4db5-761b-407b-9e2f-a46ca6bc5675","Type":"ContainerStarted","Data":"03d3426c4f51c03cf6dc4013ce42c45d7d7853b32073a4919f55bbd3c590f5fd"} Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.537887 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6d6c959c44-lwsnn" podStartSLOduration=3.537867618 podStartE2EDuration="3.537867618s" podCreationTimestamp="2026-03-08 00:45:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:45:24.515159307 +0000 UTC m=+1345.989303651" watchObservedRunningTime="2026-03-08 00:45:24.537867618 +0000 UTC m=+1346.012011962" Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.563554 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78ddcb99df-zh6nd" event={"ID":"e5d9b443-c848-4a33-a659-a241b3b19cbf","Type":"ContainerStarted","Data":"fdb6c374ce830c5392a3a2a580c78bfe1f9433c7447ab8a8cc635e35a033e6ec"} Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.567857 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b58c875bd-447x6" event={"ID":"bb4ec81f-27c0-46fd-9959-792c057d62f7","Type":"ContainerStarted","Data":"ccfa6439a067c0627ee0fb77a78747dc0217c78e30160a41aeb9ddda81d180ca"} Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.567907 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b58c875bd-447x6" event={"ID":"bb4ec81f-27c0-46fd-9959-792c057d62f7","Type":"ContainerStarted","Data":"b45eec3d2b89e349dbb3ec5577f3a60d7cd3df53ddb6dceeb719510fc6fe200e"} Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.567919 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b58c875bd-447x6" event={"ID":"bb4ec81f-27c0-46fd-9959-792c057d62f7","Type":"ContainerStarted","Data":"65ddf0422f3d1b04c25a1962dbf664b5e30f70a004e6225dc238fafa7a505560"} Mar 08 
00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.567969 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.567994 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.600861 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65899b8d79-mjhtt" event={"ID":"955ae1b9-66fe-47c3-934b-a4372a87e21a","Type":"ContainerStarted","Data":"818a65868ed0c79379077f1e1b8578b47e87168c16fb2a6e84b8e043710d4c58"} Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.618646 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd79d57f6-9kghw" event={"ID":"198d66d2-adcf-4028-9a59-9e396513f44d","Type":"ContainerStarted","Data":"a8f8a083cf06b482a32d488af2a550ae6e4fd1e322ee088703033342137641eb"} Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.623391 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b9c87cdf8-vw485" event={"ID":"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4","Type":"ContainerStarted","Data":"80d9e0b000488d1646112ed6d2c0008ab670a9316e4f8992a60169614406c2de"} Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.623924 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b9c87cdf8-vw485" event={"ID":"bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4","Type":"ContainerStarted","Data":"6f8ffc924bdcb7c4aabe4785ce7744b2abc37c23e9ae83e3dace1f63c435c856"} Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.623971 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.643418 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7b58c875bd-447x6" podStartSLOduration=2.643396067 
podStartE2EDuration="2.643396067s" podCreationTimestamp="2026-03-08 00:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:45:24.601191083 +0000 UTC m=+1346.075335428" watchObservedRunningTime="2026-03-08 00:45:24.643396067 +0000 UTC m=+1346.117540411" Mar 08 00:45:24 crc kubenswrapper[4762]: I0308 00:45:24.647195 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-b9c87cdf8-vw485" podStartSLOduration=3.647179783 podStartE2EDuration="3.647179783s" podCreationTimestamp="2026-03-08 00:45:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:45:24.646084299 +0000 UTC m=+1346.120228663" watchObservedRunningTime="2026-03-08 00:45:24.647179783 +0000 UTC m=+1346.121324127" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.281503 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25" path="/var/lib/kubelet/pods/fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25/volumes" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.601216 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b58c875bd-447x6"] Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.648732 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-86c4db5cfd-rtfn2"] Mar 08 00:45:25 crc kubenswrapper[4762]: E0308 00:45:25.649144 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25" containerName="init" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.649156 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25" containerName="init" Mar 08 00:45:25 crc kubenswrapper[4762]: E0308 00:45:25.649194 4762 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25" containerName="dnsmasq-dns" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.649200 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25" containerName="dnsmasq-dns" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.649403 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd4c63f7-50c6-4e22-a8f0-1d4f207d3b25" containerName="dnsmasq-dns" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.650897 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.652215 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-849f745c8c-pjhz2" event={"ID":"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170","Type":"ContainerStarted","Data":"e6f993f0e5ce10590f84b73cc38c5d8c93db7717080a48530dc9b4dfb07dc23a"} Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.652246 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-849f745c8c-pjhz2" event={"ID":"aa4ce185-2a5f-4da0-89e0-ad8fb82bd170","Type":"ContainerStarted","Data":"ca0421e2602733bad0b203cd66e3499dc84534be33292f847fd3f1c5aa6b69da"} Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.652998 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.653447 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.653580 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.660510 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd79d57f6-9kghw" 
event={"ID":"198d66d2-adcf-4028-9a59-9e396513f44d","Type":"ContainerStarted","Data":"7b77df23ed8910c4f0b3c488ab8c35c35c61352f2c472823ed1b74bcb47115f8"} Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.660553 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd79d57f6-9kghw" event={"ID":"198d66d2-adcf-4028-9a59-9e396513f44d","Type":"ContainerStarted","Data":"cc7decc1fab9af910422869646849ec26e4e8db62bea22fee54eb9ba87efb3c1"} Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.660594 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.660810 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.662053 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-86c4db5cfd-rtfn2"] Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.667701 4762 generic.go:334] "Generic (PLEG): container finished" podID="845d8906-f82f-418a-88ca-8cd6087fafca" containerID="45f3216608bde56fcc5b3e0ffadf963e3574c7e7f4420d36ae4606a23f6f6361" exitCode=0 Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.668535 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" event={"ID":"845d8906-f82f-418a-88ca-8cd6087fafca","Type":"ContainerDied","Data":"45f3216608bde56fcc5b3e0ffadf963e3574c7e7f4420d36ae4606a23f6f6361"} Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.668578 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.725363 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnjp7\" (UniqueName: 
\"kubernetes.io/projected/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-kube-api-access-fnjp7\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.725967 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-config-data\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.726402 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-internal-tls-certs\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.726493 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-logs\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.727500 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-public-tls-certs\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.727665 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-config-data-custom\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.727974 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-combined-ca-bundle\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.751043 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-fd79d57f6-9kghw" podStartSLOduration=3.751028598 podStartE2EDuration="3.751028598s" podCreationTimestamp="2026-03-08 00:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:45:25.742744876 +0000 UTC m=+1347.216889220" watchObservedRunningTime="2026-03-08 00:45:25.751028598 +0000 UTC m=+1347.225172942" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.771926 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-849f745c8c-pjhz2" podStartSLOduration=3.7719066530000003 podStartE2EDuration="3.771906653s" podCreationTimestamp="2026-03-08 00:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:45:25.75867055 +0000 UTC m=+1347.232814894" watchObservedRunningTime="2026-03-08 00:45:25.771906653 +0000 UTC m=+1347.246050997" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.831049 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnjp7\" (UniqueName: 
\"kubernetes.io/projected/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-kube-api-access-fnjp7\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.831109 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-config-data\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.831142 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-internal-tls-certs\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.831159 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-logs\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.831195 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-public-tls-certs\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.831223 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-config-data-custom\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.831256 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-combined-ca-bundle\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.831716 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-logs\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.837373 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-internal-tls-certs\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.841278 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-public-tls-certs\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.841794 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-combined-ca-bundle\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.842936 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-config-data\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.848784 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnjp7\" (UniqueName: \"kubernetes.io/projected/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-kube-api-access-fnjp7\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.851639 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b4e0a41-4c23-4e6f-8420-baa2dabdfef6-config-data-custom\") pod \"barbican-api-86c4db5cfd-rtfn2\" (UID: \"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6\") " pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:25 crc kubenswrapper[4762]: I0308 00:45:25.969288 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:26 crc kubenswrapper[4762]: I0308 00:45:26.676606 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b58c875bd-447x6" podUID="bb4ec81f-27c0-46fd-9959-792c057d62f7" containerName="barbican-api-log" containerID="cri-o://b45eec3d2b89e349dbb3ec5577f3a60d7cd3df53ddb6dceeb719510fc6fe200e" gracePeriod=30 Mar 08 00:45:26 crc kubenswrapper[4762]: I0308 00:45:26.676624 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b58c875bd-447x6" podUID="bb4ec81f-27c0-46fd-9959-792c057d62f7" containerName="barbican-api" containerID="cri-o://ccfa6439a067c0627ee0fb77a78747dc0217c78e30160a41aeb9ddda81d180ca" gracePeriod=30 Mar 08 00:45:27 crc kubenswrapper[4762]: I0308 00:45:27.688410 4762 generic.go:334] "Generic (PLEG): container finished" podID="bb4ec81f-27c0-46fd-9959-792c057d62f7" containerID="ccfa6439a067c0627ee0fb77a78747dc0217c78e30160a41aeb9ddda81d180ca" exitCode=0 Mar 08 00:45:27 crc kubenswrapper[4762]: I0308 00:45:27.688455 4762 generic.go:334] "Generic (PLEG): container finished" podID="bb4ec81f-27c0-46fd-9959-792c057d62f7" containerID="b45eec3d2b89e349dbb3ec5577f3a60d7cd3df53ddb6dceeb719510fc6fe200e" exitCode=143 Mar 08 00:45:27 crc kubenswrapper[4762]: I0308 00:45:27.688492 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b58c875bd-447x6" event={"ID":"bb4ec81f-27c0-46fd-9959-792c057d62f7","Type":"ContainerDied","Data":"ccfa6439a067c0627ee0fb77a78747dc0217c78e30160a41aeb9ddda81d180ca"} Mar 08 00:45:27 crc kubenswrapper[4762]: I0308 00:45:27.688550 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b58c875bd-447x6" event={"ID":"bb4ec81f-27c0-46fd-9959-792c057d62f7","Type":"ContainerDied","Data":"b45eec3d2b89e349dbb3ec5577f3a60d7cd3df53ddb6dceeb719510fc6fe200e"} Mar 08 00:45:33 crc kubenswrapper[4762]: I0308 00:45:33.755921 
4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" event={"ID":"845d8906-f82f-418a-88ca-8cd6087fafca","Type":"ContainerStarted","Data":"52684a07b094fdb73ebb9e8a1456bb831460d157f7227c7b636c90b14e27f7cc"} Mar 08 00:45:33 crc kubenswrapper[4762]: I0308 00:45:33.757151 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:33 crc kubenswrapper[4762]: I0308 00:45:33.773488 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" podStartSLOduration=12.773473753 podStartE2EDuration="12.773473753s" podCreationTimestamp="2026-03-08 00:45:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:45:33.772015239 +0000 UTC m=+1355.246159593" watchObservedRunningTime="2026-03-08 00:45:33.773473753 +0000 UTC m=+1355.247618097" Mar 08 00:45:34 crc kubenswrapper[4762]: I0308 00:45:34.357159 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:34 crc kubenswrapper[4762]: I0308 00:45:34.376609 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.362613 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.472600 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-combined-ca-bundle\") pod \"bb4ec81f-27c0-46fd-9959-792c057d62f7\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.472736 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-config-data-custom\") pod \"bb4ec81f-27c0-46fd-9959-792c057d62f7\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.472842 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pv5nz\" (UniqueName: \"kubernetes.io/projected/bb4ec81f-27c0-46fd-9959-792c057d62f7-kube-api-access-pv5nz\") pod \"bb4ec81f-27c0-46fd-9959-792c057d62f7\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.472932 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-config-data\") pod \"bb4ec81f-27c0-46fd-9959-792c057d62f7\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.472998 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4ec81f-27c0-46fd-9959-792c057d62f7-logs\") pod \"bb4ec81f-27c0-46fd-9959-792c057d62f7\" (UID: \"bb4ec81f-27c0-46fd-9959-792c057d62f7\") " Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.473901 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bb4ec81f-27c0-46fd-9959-792c057d62f7-logs" (OuterVolumeSpecName: "logs") pod "bb4ec81f-27c0-46fd-9959-792c057d62f7" (UID: "bb4ec81f-27c0-46fd-9959-792c057d62f7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.478277 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bb4ec81f-27c0-46fd-9959-792c057d62f7" (UID: "bb4ec81f-27c0-46fd-9959-792c057d62f7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.478895 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb4ec81f-27c0-46fd-9959-792c057d62f7-kube-api-access-pv5nz" (OuterVolumeSpecName: "kube-api-access-pv5nz") pod "bb4ec81f-27c0-46fd-9959-792c057d62f7" (UID: "bb4ec81f-27c0-46fd-9959-792c057d62f7"). InnerVolumeSpecName "kube-api-access-pv5nz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.501879 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb4ec81f-27c0-46fd-9959-792c057d62f7" (UID: "bb4ec81f-27c0-46fd-9959-792c057d62f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.543408 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-config-data" (OuterVolumeSpecName: "config-data") pod "bb4ec81f-27c0-46fd-9959-792c057d62f7" (UID: "bb4ec81f-27c0-46fd-9959-792c057d62f7"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.575357 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pv5nz\" (UniqueName: \"kubernetes.io/projected/bb4ec81f-27c0-46fd-9959-792c057d62f7-kube-api-access-pv5nz\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.575584 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.575670 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb4ec81f-27c0-46fd-9959-792c057d62f7-logs\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.575748 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.575836 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb4ec81f-27c0-46fd-9959-792c057d62f7-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.775271 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b58c875bd-447x6" event={"ID":"bb4ec81f-27c0-46fd-9959-792c057d62f7","Type":"ContainerDied","Data":"65ddf0422f3d1b04c25a1962dbf664b5e30f70a004e6225dc238fafa7a505560"} Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.775329 4762 scope.go:117] "RemoveContainer" containerID="ccfa6439a067c0627ee0fb77a78747dc0217c78e30160a41aeb9ddda81d180ca" Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.775343 4762 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b58c875bd-447x6" Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.810016 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b58c875bd-447x6"] Mar 08 00:45:35 crc kubenswrapper[4762]: I0308 00:45:35.818479 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7b58c875bd-447x6"] Mar 08 00:45:36 crc kubenswrapper[4762]: E0308 00:45:36.510124 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Mar 08 00:45:36 crc kubenswrapper[4762]: E0308 00:45:36.510443 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,
},VolumeMount{Name:kube-api-access-q6wql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b08eae00-a546-4fa0-bf56-8dbba6c3ffb3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 08 00:45:36 crc kubenswrapper[4762]: E0308 00:45:36.512060 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system 
image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" Mar 08 00:45:36 crc kubenswrapper[4762]: I0308 00:45:36.554436 4762 scope.go:117] "RemoveContainer" containerID="b45eec3d2b89e349dbb3ec5577f3a60d7cd3df53ddb6dceeb719510fc6fe200e" Mar 08 00:45:36 crc kubenswrapper[4762]: I0308 00:45:36.788559 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" containerName="ceilometer-notification-agent" containerID="cri-o://4dc17e7a09601f3644e9a65377c0d22057e1636e69432383d68d272ca098a214" gracePeriod=30 Mar 08 00:45:36 crc kubenswrapper[4762]: I0308 00:45:36.788846 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" containerName="sg-core" containerID="cri-o://2db18f9713756625f75a5a7df0515ce7f74e0e08b3d01c5f438c3d81945f68e1" gracePeriod=30 Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:36.999892 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-86c4db5cfd-rtfn2"] Mar 08 00:45:37 crc kubenswrapper[4762]: W0308 00:45:37.010380 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b4e0a41_4c23_4e6f_8420_baa2dabdfef6.slice/crio-33e03ad786131bde0949c3546342bb35e7fc2cb0b609d6fc4299881eb7b2e66d WatchSource:0}: Error finding container 33e03ad786131bde0949c3546342bb35e7fc2cb0b609d6fc4299881eb7b2e66d: Status 404 returned error can't find the container with id 33e03ad786131bde0949c3546342bb35e7fc2cb0b609d6fc4299881eb7b2e66d Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.285908 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb4ec81f-27c0-46fd-9959-792c057d62f7" path="/var/lib/kubelet/pods/bb4ec81f-27c0-46fd-9959-792c057d62f7/volumes" Mar 08 00:45:37 crc 
kubenswrapper[4762]: I0308 00:45:37.422858 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b58c875bd-447x6" podUID="bb4ec81f-27c0-46fd-9959-792c057d62f7" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.191:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.422884 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b58c875bd-447x6" podUID="bb4ec81f-27c0-46fd-9959-792c057d62f7" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.191:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.682946 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.745522 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-mrfsn"] Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.745988 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" podUID="ab202f58-df7d-49ee-bf13-116fee0dc87c" containerName="dnsmasq-dns" containerID="cri-o://f84bfb5fc28d8710f0f0e31fe26c83363a6285a8668695dc034a7091af24e8dd" gracePeriod=10 Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.819912 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78ddcb99df-zh6nd" event={"ID":"e5d9b443-c848-4a33-a659-a241b3b19cbf","Type":"ContainerStarted","Data":"82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac"} Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.820101 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78ddcb99df-zh6nd" 
event={"ID":"e5d9b443-c848-4a33-a659-a241b3b19cbf","Type":"ContainerStarted","Data":"a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc"} Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.839723 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" event={"ID":"faf8ba27-7964-4650-9316-aabba252ed71","Type":"ContainerStarted","Data":"efd97cbb19b603f410455877fb041056907d0750075c7337e9ca5f19e93dedc9"} Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.839813 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" event={"ID":"faf8ba27-7964-4650-9316-aabba252ed71","Type":"ContainerStarted","Data":"08f24aafe4c5d6384e038d17ec68599e194630b89d281ac850e5342498795cb4"} Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.850281 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-78ddcb99df-zh6nd" podStartSLOduration=3.896697863 podStartE2EDuration="16.850264795s" podCreationTimestamp="2026-03-08 00:45:21 +0000 UTC" firstStartedPulling="2026-03-08 00:45:23.598716102 +0000 UTC m=+1345.072860446" lastFinishedPulling="2026-03-08 00:45:36.552283034 +0000 UTC m=+1358.026427378" observedRunningTime="2026-03-08 00:45:37.848027437 +0000 UTC m=+1359.322171781" watchObservedRunningTime="2026-03-08 00:45:37.850264795 +0000 UTC m=+1359.324409139" Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.866929 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-z4g9j" event={"ID":"4992e7da-9de7-4354-a35f-a68f8bd0013a","Type":"ContainerStarted","Data":"0a105b6f068febd894a6f32603b932912433f734261860740743fce511f2f984"} Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.873479 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" podStartSLOduration=3.957326096 
podStartE2EDuration="16.873459259s" podCreationTimestamp="2026-03-08 00:45:21 +0000 UTC" firstStartedPulling="2026-03-08 00:45:23.636292665 +0000 UTC m=+1345.110437009" lastFinishedPulling="2026-03-08 00:45:36.552425788 +0000 UTC m=+1358.026570172" observedRunningTime="2026-03-08 00:45:37.871240582 +0000 UTC m=+1359.345384936" watchObservedRunningTime="2026-03-08 00:45:37.873459259 +0000 UTC m=+1359.347603603" Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.889910 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pxw9p" event={"ID":"8511806b-d3fb-48df-8348-33f84645e2a3","Type":"ContainerStarted","Data":"ebe30539e9d64aa6b38e21717cada48cf84ff4cbd77750f5e048bfde986c5252"} Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.908524 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" event={"ID":"9f2c4db5-761b-407b-9e2f-a46ca6bc5675","Type":"ContainerStarted","Data":"dcb4f000406a6a9879e4ef13b3b4f4fb5850a2bfcda9d9a1fc3bf8fd6b82ab8a"} Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.908587 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" event={"ID":"9f2c4db5-761b-407b-9e2f-a46ca6bc5675","Type":"ContainerStarted","Data":"13c91025246010b1b0ed6337cbcfe4512d229efae51b7a7b89bf754034262ae5"} Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.930962 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-z4g9j" podStartSLOduration=3.401820272 podStartE2EDuration="54.930939628s" podCreationTimestamp="2026-03-08 00:44:43 +0000 UTC" firstStartedPulling="2026-03-08 00:44:45.107310457 +0000 UTC m=+1306.581454801" lastFinishedPulling="2026-03-08 00:45:36.636429803 +0000 UTC m=+1358.110574157" observedRunningTime="2026-03-08 00:45:37.898191913 +0000 UTC m=+1359.372336267" watchObservedRunningTime="2026-03-08 00:45:37.930939628 +0000 UTC m=+1359.405083972" Mar 
08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.943911 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65899b8d79-mjhtt" event={"ID":"955ae1b9-66fe-47c3-934b-a4372a87e21a","Type":"ContainerStarted","Data":"adb0791fb6e22d4bbc3d6040c29ccd756759848d166a85f340ddf07a022d2648"} Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.943960 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-65899b8d79-mjhtt" event={"ID":"955ae1b9-66fe-47c3-934b-a4372a87e21a","Type":"ContainerStarted","Data":"3b41bb84a970ac5bd5198edb72a0dd9a4493cb1521b297b82f4f6efd7640f7a8"} Mar 08 00:45:37 crc kubenswrapper[4762]: I0308 00:45:37.965719 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-pxw9p" podStartSLOduration=3.888750853 podStartE2EDuration="54.965700176s" podCreationTimestamp="2026-03-08 00:44:43 +0000 UTC" firstStartedPulling="2026-03-08 00:44:45.558524192 +0000 UTC m=+1307.032668536" lastFinishedPulling="2026-03-08 00:45:36.635473505 +0000 UTC m=+1358.109617859" observedRunningTime="2026-03-08 00:45:37.922065038 +0000 UTC m=+1359.396209382" watchObservedRunningTime="2026-03-08 00:45:37.965700176 +0000 UTC m=+1359.439844520" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.007055 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6875ccb78-kt8h4" podStartSLOduration=3.477935022 podStartE2EDuration="16.007036073s" podCreationTimestamp="2026-03-08 00:45:22 +0000 UTC" firstStartedPulling="2026-03-08 00:45:24.037189909 +0000 UTC m=+1345.511334253" lastFinishedPulling="2026-03-08 00:45:36.56629096 +0000 UTC m=+1358.040435304" observedRunningTime="2026-03-08 00:45:37.942688936 +0000 UTC m=+1359.416833280" watchObservedRunningTime="2026-03-08 00:45:38.007036073 +0000 UTC m=+1359.481180427" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.013480 4762 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/barbican-api-86c4db5cfd-rtfn2" event={"ID":"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6","Type":"ContainerStarted","Data":"8eac33e7de374e5317d1e82103d3bf669cd0492ea0b752e5d04b37515d516253"} Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.013546 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86c4db5cfd-rtfn2" event={"ID":"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6","Type":"ContainerStarted","Data":"f7a8e1dfc9be3c4199d5a667b146ec55c51f777f621069fd7dcac5417dedde26"} Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.013559 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86c4db5cfd-rtfn2" event={"ID":"5b4e0a41-4c23-4e6f-8420-baa2dabdfef6","Type":"ContainerStarted","Data":"33e03ad786131bde0949c3546342bb35e7fc2cb0b609d6fc4299881eb7b2e66d"} Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.014004 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.014052 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.056626 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5f7494d5db-9bxk7"] Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.057749 4762 generic.go:334] "Generic (PLEG): container finished" podID="b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" containerID="2db18f9713756625f75a5a7df0515ce7f74e0e08b3d01c5f438c3d81945f68e1" exitCode=2 Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.057792 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3","Type":"ContainerDied","Data":"2db18f9713756625f75a5a7df0515ce7f74e0e08b3d01c5f438c3d81945f68e1"} Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.060501 4762 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-65899b8d79-mjhtt" podStartSLOduration=3.599357885 podStartE2EDuration="16.060482529s" podCreationTimestamp="2026-03-08 00:45:22 +0000 UTC" firstStartedPulling="2026-03-08 00:45:24.10396617 +0000 UTC m=+1345.578110514" lastFinishedPulling="2026-03-08 00:45:36.565090824 +0000 UTC m=+1358.039235158" observedRunningTime="2026-03-08 00:45:38.0026788 +0000 UTC m=+1359.476823144" watchObservedRunningTime="2026-03-08 00:45:38.060482529 +0000 UTC m=+1359.534626873" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.110695 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-86c4db5cfd-rtfn2" podStartSLOduration=13.110678315 podStartE2EDuration="13.110678315s" podCreationTimestamp="2026-03-08 00:45:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:45:38.041905993 +0000 UTC m=+1359.516050337" watchObservedRunningTime="2026-03-08 00:45:38.110678315 +0000 UTC m=+1359.584822659" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.111899 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-78ddcb99df-zh6nd"] Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.339416 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.475065 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lshgb\" (UniqueName: \"kubernetes.io/projected/ab202f58-df7d-49ee-bf13-116fee0dc87c-kube-api-access-lshgb\") pod \"ab202f58-df7d-49ee-bf13-116fee0dc87c\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.475129 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-dns-swift-storage-0\") pod \"ab202f58-df7d-49ee-bf13-116fee0dc87c\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.475245 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-ovsdbserver-nb\") pod \"ab202f58-df7d-49ee-bf13-116fee0dc87c\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.475278 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-dns-svc\") pod \"ab202f58-df7d-49ee-bf13-116fee0dc87c\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.475306 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-ovsdbserver-sb\") pod \"ab202f58-df7d-49ee-bf13-116fee0dc87c\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.475388 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-config\") pod \"ab202f58-df7d-49ee-bf13-116fee0dc87c\" (UID: \"ab202f58-df7d-49ee-bf13-116fee0dc87c\") " Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.484307 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab202f58-df7d-49ee-bf13-116fee0dc87c-kube-api-access-lshgb" (OuterVolumeSpecName: "kube-api-access-lshgb") pod "ab202f58-df7d-49ee-bf13-116fee0dc87c" (UID: "ab202f58-df7d-49ee-bf13-116fee0dc87c"). InnerVolumeSpecName "kube-api-access-lshgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.562717 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ab202f58-df7d-49ee-bf13-116fee0dc87c" (UID: "ab202f58-df7d-49ee-bf13-116fee0dc87c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.564687 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ab202f58-df7d-49ee-bf13-116fee0dc87c" (UID: "ab202f58-df7d-49ee-bf13-116fee0dc87c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.578035 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.578067 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lshgb\" (UniqueName: \"kubernetes.io/projected/ab202f58-df7d-49ee-bf13-116fee0dc87c-kube-api-access-lshgb\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.578080 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.615576 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ab202f58-df7d-49ee-bf13-116fee0dc87c" (UID: "ab202f58-df7d-49ee-bf13-116fee0dc87c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.619705 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-config" (OuterVolumeSpecName: "config") pod "ab202f58-df7d-49ee-bf13-116fee0dc87c" (UID: "ab202f58-df7d-49ee-bf13-116fee0dc87c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.644660 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ab202f58-df7d-49ee-bf13-116fee0dc87c" (UID: "ab202f58-df7d-49ee-bf13-116fee0dc87c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.680218 4762 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.680444 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:38 crc kubenswrapper[4762]: I0308 00:45:38.680522 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab202f58-df7d-49ee-bf13-116fee0dc87c-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:39 crc kubenswrapper[4762]: I0308 00:45:39.068376 4762 generic.go:334] "Generic (PLEG): container finished" podID="ab202f58-df7d-49ee-bf13-116fee0dc87c" containerID="f84bfb5fc28d8710f0f0e31fe26c83363a6285a8668695dc034a7091af24e8dd" exitCode=0 Mar 08 00:45:39 crc kubenswrapper[4762]: I0308 00:45:39.068449 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" Mar 08 00:45:39 crc kubenswrapper[4762]: I0308 00:45:39.068485 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" event={"ID":"ab202f58-df7d-49ee-bf13-116fee0dc87c","Type":"ContainerDied","Data":"f84bfb5fc28d8710f0f0e31fe26c83363a6285a8668695dc034a7091af24e8dd"} Mar 08 00:45:39 crc kubenswrapper[4762]: I0308 00:45:39.069612 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-mrfsn" event={"ID":"ab202f58-df7d-49ee-bf13-116fee0dc87c","Type":"ContainerDied","Data":"46e7bcee59d2f0df686c03c0cc13c19110dc0c75ebdd450074903834df172dde"} Mar 08 00:45:39 crc kubenswrapper[4762]: I0308 00:45:39.069648 4762 scope.go:117] "RemoveContainer" containerID="f84bfb5fc28d8710f0f0e31fe26c83363a6285a8668695dc034a7091af24e8dd" Mar 08 00:45:39 crc kubenswrapper[4762]: I0308 00:45:39.114026 4762 scope.go:117] "RemoveContainer" containerID="8a02f5a06a6383b02e748eca50e74acb673f7c11758474234ca2b0729fd33df6" Mar 08 00:45:39 crc kubenswrapper[4762]: I0308 00:45:39.119215 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-mrfsn"] Mar 08 00:45:39 crc kubenswrapper[4762]: I0308 00:45:39.128628 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-mrfsn"] Mar 08 00:45:39 crc kubenswrapper[4762]: I0308 00:45:39.136831 4762 scope.go:117] "RemoveContainer" containerID="f84bfb5fc28d8710f0f0e31fe26c83363a6285a8668695dc034a7091af24e8dd" Mar 08 00:45:39 crc kubenswrapper[4762]: E0308 00:45:39.137447 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f84bfb5fc28d8710f0f0e31fe26c83363a6285a8668695dc034a7091af24e8dd\": container with ID starting with f84bfb5fc28d8710f0f0e31fe26c83363a6285a8668695dc034a7091af24e8dd not found: ID does not exist" 
containerID="f84bfb5fc28d8710f0f0e31fe26c83363a6285a8668695dc034a7091af24e8dd" Mar 08 00:45:39 crc kubenswrapper[4762]: I0308 00:45:39.137497 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f84bfb5fc28d8710f0f0e31fe26c83363a6285a8668695dc034a7091af24e8dd"} err="failed to get container status \"f84bfb5fc28d8710f0f0e31fe26c83363a6285a8668695dc034a7091af24e8dd\": rpc error: code = NotFound desc = could not find container \"f84bfb5fc28d8710f0f0e31fe26c83363a6285a8668695dc034a7091af24e8dd\": container with ID starting with f84bfb5fc28d8710f0f0e31fe26c83363a6285a8668695dc034a7091af24e8dd not found: ID does not exist" Mar 08 00:45:39 crc kubenswrapper[4762]: I0308 00:45:39.137528 4762 scope.go:117] "RemoveContainer" containerID="8a02f5a06a6383b02e748eca50e74acb673f7c11758474234ca2b0729fd33df6" Mar 08 00:45:39 crc kubenswrapper[4762]: E0308 00:45:39.137907 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a02f5a06a6383b02e748eca50e74acb673f7c11758474234ca2b0729fd33df6\": container with ID starting with 8a02f5a06a6383b02e748eca50e74acb673f7c11758474234ca2b0729fd33df6 not found: ID does not exist" containerID="8a02f5a06a6383b02e748eca50e74acb673f7c11758474234ca2b0729fd33df6" Mar 08 00:45:39 crc kubenswrapper[4762]: I0308 00:45:39.137935 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a02f5a06a6383b02e748eca50e74acb673f7c11758474234ca2b0729fd33df6"} err="failed to get container status \"8a02f5a06a6383b02e748eca50e74acb673f7c11758474234ca2b0729fd33df6\": rpc error: code = NotFound desc = could not find container \"8a02f5a06a6383b02e748eca50e74acb673f7c11758474234ca2b0729fd33df6\": container with ID starting with 8a02f5a06a6383b02e748eca50e74acb673f7c11758474234ca2b0729fd33df6 not found: ID does not exist" Mar 08 00:45:39 crc kubenswrapper[4762]: I0308 00:45:39.281699 4762 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab202f58-df7d-49ee-bf13-116fee0dc87c" path="/var/lib/kubelet/pods/ab202f58-df7d-49ee-bf13-116fee0dc87c/volumes" Mar 08 00:45:40 crc kubenswrapper[4762]: I0308 00:45:40.085419 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" podUID="faf8ba27-7964-4650-9316-aabba252ed71" containerName="barbican-keystone-listener-log" containerID="cri-o://08f24aafe4c5d6384e038d17ec68599e194630b89d281ac850e5342498795cb4" gracePeriod=30 Mar 08 00:45:40 crc kubenswrapper[4762]: I0308 00:45:40.085497 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" podUID="faf8ba27-7964-4650-9316-aabba252ed71" containerName="barbican-keystone-listener" containerID="cri-o://efd97cbb19b603f410455877fb041056907d0750075c7337e9ca5f19e93dedc9" gracePeriod=30 Mar 08 00:45:40 crc kubenswrapper[4762]: I0308 00:45:40.085646 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-78ddcb99df-zh6nd" podUID="e5d9b443-c848-4a33-a659-a241b3b19cbf" containerName="barbican-worker-log" containerID="cri-o://a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc" gracePeriod=30 Mar 08 00:45:40 crc kubenswrapper[4762]: I0308 00:45:40.085690 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-78ddcb99df-zh6nd" podUID="e5d9b443-c848-4a33-a659-a241b3b19cbf" containerName="barbican-worker" containerID="cri-o://82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac" gracePeriod=30 Mar 08 00:45:40 crc kubenswrapper[4762]: I0308 00:45:40.934508 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.030773 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-combined-ca-bundle\") pod \"e5d9b443-c848-4a33-a659-a241b3b19cbf\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.031283 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swl8j\" (UniqueName: \"kubernetes.io/projected/e5d9b443-c848-4a33-a659-a241b3b19cbf-kube-api-access-swl8j\") pod \"e5d9b443-c848-4a33-a659-a241b3b19cbf\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.031321 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5d9b443-c848-4a33-a659-a241b3b19cbf-logs\") pod \"e5d9b443-c848-4a33-a659-a241b3b19cbf\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.031367 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-config-data\") pod \"e5d9b443-c848-4a33-a659-a241b3b19cbf\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.031390 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-config-data-custom\") pod \"e5d9b443-c848-4a33-a659-a241b3b19cbf\" (UID: \"e5d9b443-c848-4a33-a659-a241b3b19cbf\") " Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.031931 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e5d9b443-c848-4a33-a659-a241b3b19cbf-logs" (OuterVolumeSpecName: "logs") pod "e5d9b443-c848-4a33-a659-a241b3b19cbf" (UID: "e5d9b443-c848-4a33-a659-a241b3b19cbf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.032031 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5d9b443-c848-4a33-a659-a241b3b19cbf-logs\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.037267 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e5d9b443-c848-4a33-a659-a241b3b19cbf" (UID: "e5d9b443-c848-4a33-a659-a241b3b19cbf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.039944 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d9b443-c848-4a33-a659-a241b3b19cbf-kube-api-access-swl8j" (OuterVolumeSpecName: "kube-api-access-swl8j") pod "e5d9b443-c848-4a33-a659-a241b3b19cbf" (UID: "e5d9b443-c848-4a33-a659-a241b3b19cbf"). InnerVolumeSpecName "kube-api-access-swl8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.058438 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5d9b443-c848-4a33-a659-a241b3b19cbf" (UID: "e5d9b443-c848-4a33-a659-a241b3b19cbf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.093553 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-config-data" (OuterVolumeSpecName: "config-data") pod "e5d9b443-c848-4a33-a659-a241b3b19cbf" (UID: "e5d9b443-c848-4a33-a659-a241b3b19cbf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.111398 4762 generic.go:334] "Generic (PLEG): container finished" podID="e5d9b443-c848-4a33-a659-a241b3b19cbf" containerID="82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac" exitCode=0 Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.111441 4762 generic.go:334] "Generic (PLEG): container finished" podID="e5d9b443-c848-4a33-a659-a241b3b19cbf" containerID="a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc" exitCode=143 Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.111549 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78ddcb99df-zh6nd" event={"ID":"e5d9b443-c848-4a33-a659-a241b3b19cbf","Type":"ContainerDied","Data":"82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac"} Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.111593 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78ddcb99df-zh6nd" event={"ID":"e5d9b443-c848-4a33-a659-a241b3b19cbf","Type":"ContainerDied","Data":"a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc"} Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.111616 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78ddcb99df-zh6nd" event={"ID":"e5d9b443-c848-4a33-a659-a241b3b19cbf","Type":"ContainerDied","Data":"fdb6c374ce830c5392a3a2a580c78bfe1f9433c7447ab8a8cc635e35a033e6ec"} Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 
00:45:41.111655 4762 scope.go:117] "RemoveContainer" containerID="82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.112930 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78ddcb99df-zh6nd" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.114950 4762 generic.go:334] "Generic (PLEG): container finished" podID="faf8ba27-7964-4650-9316-aabba252ed71" containerID="08f24aafe4c5d6384e038d17ec68599e194630b89d281ac850e5342498795cb4" exitCode=143 Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.115255 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" event={"ID":"faf8ba27-7964-4650-9316-aabba252ed71","Type":"ContainerDied","Data":"08f24aafe4c5d6384e038d17ec68599e194630b89d281ac850e5342498795cb4"} Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.118082 4762 generic.go:334] "Generic (PLEG): container finished" podID="4992e7da-9de7-4354-a35f-a68f8bd0013a" containerID="0a105b6f068febd894a6f32603b932912433f734261860740743fce511f2f984" exitCode=0 Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.118165 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-z4g9j" event={"ID":"4992e7da-9de7-4354-a35f-a68f8bd0013a","Type":"ContainerDied","Data":"0a105b6f068febd894a6f32603b932912433f734261860740743fce511f2f984"} Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.136931 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swl8j\" (UniqueName: \"kubernetes.io/projected/e5d9b443-c848-4a33-a659-a241b3b19cbf-kube-api-access-swl8j\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.136973 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-config-data\") on node \"crc\" 
DevicePath \"\"" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.136988 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.137000 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5d9b443-c848-4a33-a659-a241b3b19cbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.141359 4762 scope.go:117] "RemoveContainer" containerID="a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.154594 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-78ddcb99df-zh6nd"] Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.162448 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-78ddcb99df-zh6nd"] Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.177947 4762 scope.go:117] "RemoveContainer" containerID="82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac" Mar 08 00:45:41 crc kubenswrapper[4762]: E0308 00:45:41.178316 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac\": container with ID starting with 82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac not found: ID does not exist" containerID="82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.178353 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac"} err="failed to get container status 
\"82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac\": rpc error: code = NotFound desc = could not find container \"82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac\": container with ID starting with 82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac not found: ID does not exist" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.178381 4762 scope.go:117] "RemoveContainer" containerID="a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc" Mar 08 00:45:41 crc kubenswrapper[4762]: E0308 00:45:41.178675 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc\": container with ID starting with a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc not found: ID does not exist" containerID="a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.178729 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc"} err="failed to get container status \"a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc\": rpc error: code = NotFound desc = could not find container \"a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc\": container with ID starting with a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc not found: ID does not exist" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.178753 4762 scope.go:117] "RemoveContainer" containerID="82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.179056 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac"} err="failed to get 
container status \"82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac\": rpc error: code = NotFound desc = could not find container \"82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac\": container with ID starting with 82137240d5ad5f094ab37ef014bdcdf1e7638bb71668eb78a6ebad84c67282ac not found: ID does not exist" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.179081 4762 scope.go:117] "RemoveContainer" containerID="a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.179437 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc"} err="failed to get container status \"a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc\": rpc error: code = NotFound desc = could not find container \"a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc\": container with ID starting with a1670e85b1521fa568685d7011d3d563c1aa1597e643b9d08f9b03e1609ca0fc not found: ID does not exist" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.277481 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d9b443-c848-4a33-a659-a241b3b19cbf" path="/var/lib/kubelet/pods/e5d9b443-c848-4a33-a659-a241b3b19cbf/volumes" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.724108 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.851050 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-config-data-custom\") pod \"faf8ba27-7964-4650-9316-aabba252ed71\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.851118 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmh9l\" (UniqueName: \"kubernetes.io/projected/faf8ba27-7964-4650-9316-aabba252ed71-kube-api-access-tmh9l\") pod \"faf8ba27-7964-4650-9316-aabba252ed71\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.851279 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-config-data\") pod \"faf8ba27-7964-4650-9316-aabba252ed71\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.851390 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf8ba27-7964-4650-9316-aabba252ed71-logs\") pod \"faf8ba27-7964-4650-9316-aabba252ed71\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.851491 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-combined-ca-bundle\") pod \"faf8ba27-7964-4650-9316-aabba252ed71\" (UID: \"faf8ba27-7964-4650-9316-aabba252ed71\") " Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.852549 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/faf8ba27-7964-4650-9316-aabba252ed71-logs" (OuterVolumeSpecName: "logs") pod "faf8ba27-7964-4650-9316-aabba252ed71" (UID: "faf8ba27-7964-4650-9316-aabba252ed71"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.856853 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf8ba27-7964-4650-9316-aabba252ed71-kube-api-access-tmh9l" (OuterVolumeSpecName: "kube-api-access-tmh9l") pod "faf8ba27-7964-4650-9316-aabba252ed71" (UID: "faf8ba27-7964-4650-9316-aabba252ed71"). InnerVolumeSpecName "kube-api-access-tmh9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.858708 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "faf8ba27-7964-4650-9316-aabba252ed71" (UID: "faf8ba27-7964-4650-9316-aabba252ed71"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.899941 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "faf8ba27-7964-4650-9316-aabba252ed71" (UID: "faf8ba27-7964-4650-9316-aabba252ed71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.913640 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-config-data" (OuterVolumeSpecName: "config-data") pod "faf8ba27-7964-4650-9316-aabba252ed71" (UID: "faf8ba27-7964-4650-9316-aabba252ed71"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.960558 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/faf8ba27-7964-4650-9316-aabba252ed71-logs\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.960886 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.960973 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.961109 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmh9l\" (UniqueName: \"kubernetes.io/projected/faf8ba27-7964-4650-9316-aabba252ed71-kube-api-access-tmh9l\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:41 crc kubenswrapper[4762]: I0308 00:45:41.961190 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/faf8ba27-7964-4650-9316-aabba252ed71-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.132360 4762 generic.go:334] "Generic (PLEG): container finished" podID="faf8ba27-7964-4650-9316-aabba252ed71" containerID="efd97cbb19b603f410455877fb041056907d0750075c7337e9ca5f19e93dedc9" exitCode=0 Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.132435 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.132444 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" event={"ID":"faf8ba27-7964-4650-9316-aabba252ed71","Type":"ContainerDied","Data":"efd97cbb19b603f410455877fb041056907d0750075c7337e9ca5f19e93dedc9"} Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.132667 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7494d5db-9bxk7" event={"ID":"faf8ba27-7964-4650-9316-aabba252ed71","Type":"ContainerDied","Data":"1706175f817afbaba3460a740caead7930868b89cd81a2c8513f0124364d229f"} Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.132687 4762 scope.go:117] "RemoveContainer" containerID="efd97cbb19b603f410455877fb041056907d0750075c7337e9ca5f19e93dedc9" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.135706 4762 generic.go:334] "Generic (PLEG): container finished" podID="8511806b-d3fb-48df-8348-33f84645e2a3" containerID="ebe30539e9d64aa6b38e21717cada48cf84ff4cbd77750f5e048bfde986c5252" exitCode=0 Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.135836 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pxw9p" event={"ID":"8511806b-d3fb-48df-8348-33f84645e2a3","Type":"ContainerDied","Data":"ebe30539e9d64aa6b38e21717cada48cf84ff4cbd77750f5e048bfde986c5252"} Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.142932 4762 generic.go:334] "Generic (PLEG): container finished" podID="b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" containerID="4dc17e7a09601f3644e9a65377c0d22057e1636e69432383d68d272ca098a214" exitCode=0 Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.143038 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3","Type":"ContainerDied","Data":"4dc17e7a09601f3644e9a65377c0d22057e1636e69432383d68d272ca098a214"} Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.163413 4762 scope.go:117] "RemoveContainer" containerID="08f24aafe4c5d6384e038d17ec68599e194630b89d281ac850e5342498795cb4" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.197821 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5f7494d5db-9bxk7"] Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.208581 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5f7494d5db-9bxk7"] Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.220909 4762 scope.go:117] "RemoveContainer" containerID="efd97cbb19b603f410455877fb041056907d0750075c7337e9ca5f19e93dedc9" Mar 08 00:45:42 crc kubenswrapper[4762]: E0308 00:45:42.222106 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efd97cbb19b603f410455877fb041056907d0750075c7337e9ca5f19e93dedc9\": container with ID starting with efd97cbb19b603f410455877fb041056907d0750075c7337e9ca5f19e93dedc9 not found: ID does not exist" containerID="efd97cbb19b603f410455877fb041056907d0750075c7337e9ca5f19e93dedc9" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.222162 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd97cbb19b603f410455877fb041056907d0750075c7337e9ca5f19e93dedc9"} err="failed to get container status \"efd97cbb19b603f410455877fb041056907d0750075c7337e9ca5f19e93dedc9\": rpc error: code = NotFound desc = could not find container \"efd97cbb19b603f410455877fb041056907d0750075c7337e9ca5f19e93dedc9\": container with ID starting with efd97cbb19b603f410455877fb041056907d0750075c7337e9ca5f19e93dedc9 not found: ID does not exist" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.222197 4762 
scope.go:117] "RemoveContainer" containerID="08f24aafe4c5d6384e038d17ec68599e194630b89d281ac850e5342498795cb4" Mar 08 00:45:42 crc kubenswrapper[4762]: E0308 00:45:42.222660 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08f24aafe4c5d6384e038d17ec68599e194630b89d281ac850e5342498795cb4\": container with ID starting with 08f24aafe4c5d6384e038d17ec68599e194630b89d281ac850e5342498795cb4 not found: ID does not exist" containerID="08f24aafe4c5d6384e038d17ec68599e194630b89d281ac850e5342498795cb4" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.222694 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08f24aafe4c5d6384e038d17ec68599e194630b89d281ac850e5342498795cb4"} err="failed to get container status \"08f24aafe4c5d6384e038d17ec68599e194630b89d281ac850e5342498795cb4\": rpc error: code = NotFound desc = could not find container \"08f24aafe4c5d6384e038d17ec68599e194630b89d281ac850e5342498795cb4\": container with ID starting with 08f24aafe4c5d6384e038d17ec68599e194630b89d281ac850e5342498795cb4 not found: ID does not exist" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.700132 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.710470 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-z4g9j" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.774373 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6wql\" (UniqueName: \"kubernetes.io/projected/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-kube-api-access-q6wql\") pod \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.774424 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-sg-core-conf-yaml\") pod \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.774469 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-run-httpd\") pod \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.774503 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4992e7da-9de7-4354-a35f-a68f8bd0013a-config-data\") pod \"4992e7da-9de7-4354-a35f-a68f8bd0013a\" (UID: \"4992e7da-9de7-4354-a35f-a68f8bd0013a\") " Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.774940 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" (UID: "b08eae00-a546-4fa0-bf56-8dbba6c3ffb3"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.774983 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-log-httpd\") pod \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.775122 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-scripts\") pod \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.775284 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" (UID: "b08eae00-a546-4fa0-bf56-8dbba6c3ffb3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.775553 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-config-data\") pod \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.775587 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6m4g\" (UniqueName: \"kubernetes.io/projected/4992e7da-9de7-4354-a35f-a68f8bd0013a-kube-api-access-c6m4g\") pod \"4992e7da-9de7-4354-a35f-a68f8bd0013a\" (UID: \"4992e7da-9de7-4354-a35f-a68f8bd0013a\") " Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.775650 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-combined-ca-bundle\") pod \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\" (UID: \"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3\") " Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.775695 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4992e7da-9de7-4354-a35f-a68f8bd0013a-combined-ca-bundle\") pod \"4992e7da-9de7-4354-a35f-a68f8bd0013a\" (UID: \"4992e7da-9de7-4354-a35f-a68f8bd0013a\") " Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.776203 4762 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.776224 4762 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-log-httpd\") on node \"crc\" DevicePath 
\"\"" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.779701 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4992e7da-9de7-4354-a35f-a68f8bd0013a-kube-api-access-c6m4g" (OuterVolumeSpecName: "kube-api-access-c6m4g") pod "4992e7da-9de7-4354-a35f-a68f8bd0013a" (UID: "4992e7da-9de7-4354-a35f-a68f8bd0013a"). InnerVolumeSpecName "kube-api-access-c6m4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.789033 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-kube-api-access-q6wql" (OuterVolumeSpecName: "kube-api-access-q6wql") pod "b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" (UID: "b08eae00-a546-4fa0-bf56-8dbba6c3ffb3"). InnerVolumeSpecName "kube-api-access-q6wql". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.792526 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-scripts" (OuterVolumeSpecName: "scripts") pod "b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" (UID: "b08eae00-a546-4fa0-bf56-8dbba6c3ffb3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.810855 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" (UID: "b08eae00-a546-4fa0-bf56-8dbba6c3ffb3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.811365 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-config-data" (OuterVolumeSpecName: "config-data") pod "b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" (UID: "b08eae00-a546-4fa0-bf56-8dbba6c3ffb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.816935 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" (UID: "b08eae00-a546-4fa0-bf56-8dbba6c3ffb3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.817348 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4992e7da-9de7-4354-a35f-a68f8bd0013a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4992e7da-9de7-4354-a35f-a68f8bd0013a" (UID: "4992e7da-9de7-4354-a35f-a68f8bd0013a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.851215 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.851279 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.851325 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.852261 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94fdabdefc94b9566cc477b9dd53129703a711b224f05ccb5a6e2de2ee8a0c6d"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.852323 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://94fdabdefc94b9566cc477b9dd53129703a711b224f05ccb5a6e2de2ee8a0c6d" gracePeriod=600 Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.877427 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.877451 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.877463 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6m4g\" (UniqueName: \"kubernetes.io/projected/4992e7da-9de7-4354-a35f-a68f8bd0013a-kube-api-access-c6m4g\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.877475 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.877484 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4992e7da-9de7-4354-a35f-a68f8bd0013a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.877495 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6wql\" (UniqueName: \"kubernetes.io/projected/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-kube-api-access-q6wql\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.877517 4762 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.879467 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4992e7da-9de7-4354-a35f-a68f8bd0013a-config-data" (OuterVolumeSpecName: "config-data") pod 
"4992e7da-9de7-4354-a35f-a68f8bd0013a" (UID: "4992e7da-9de7-4354-a35f-a68f8bd0013a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:42 crc kubenswrapper[4762]: E0308 00:45:42.928640 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e384d81_de01_4ab9_b10b_2c9c5b45422c.slice/crio-94fdabdefc94b9566cc477b9dd53129703a711b224f05ccb5a6e2de2ee8a0c6d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e384d81_de01_4ab9_b10b_2c9c5b45422c.slice/crio-conmon-94fdabdefc94b9566cc477b9dd53129703a711b224f05ccb5a6e2de2ee8a0c6d.scope\": RecentStats: unable to find data in memory cache]" Mar 08 00:45:42 crc kubenswrapper[4762]: I0308 00:45:42.979668 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4992e7da-9de7-4354-a35f-a68f8bd0013a-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.168409 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="94fdabdefc94b9566cc477b9dd53129703a711b224f05ccb5a6e2de2ee8a0c6d" exitCode=0 Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.168491 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"94fdabdefc94b9566cc477b9dd53129703a711b224f05ccb5a6e2de2ee8a0c6d"} Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.168523 4762 scope.go:117] "RemoveContainer" containerID="fc00848745303e5c66afef8ceef215b964b6d630a4ebb3163157afdcd2292c30" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.171746 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"b08eae00-a546-4fa0-bf56-8dbba6c3ffb3","Type":"ContainerDied","Data":"3a604f31656bf300a0bb6b9e39f92934314c59b6401afce47b47133cd151ffb3"} Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.171909 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.176503 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-z4g9j" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.176688 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-z4g9j" event={"ID":"4992e7da-9de7-4354-a35f-a68f8bd0013a","Type":"ContainerDied","Data":"6e26c685d72544e5d4b3f4de972bf580974b5766b12ea43f9bd8ab3ecd042b7a"} Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.176724 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e26c685d72544e5d4b3f4de972bf580974b5766b12ea43f9bd8ab3ecd042b7a" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.196230 4762 scope.go:117] "RemoveContainer" containerID="2db18f9713756625f75a5a7df0515ce7f74e0e08b3d01c5f438c3d81945f68e1" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.289403 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faf8ba27-7964-4650-9316-aabba252ed71" path="/var/lib/kubelet/pods/faf8ba27-7964-4650-9316-aabba252ed71/volumes" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.290073 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.290100 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.300809 4762 scope.go:117] "RemoveContainer" containerID="4dc17e7a09601f3644e9a65377c0d22057e1636e69432383d68d272ca098a214" Mar 08 00:45:43 crc kubenswrapper[4762]: 
I0308 00:45:43.302880 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:45:43 crc kubenswrapper[4762]: E0308 00:45:43.303313 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d9b443-c848-4a33-a659-a241b3b19cbf" containerName="barbican-worker-log" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303331 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d9b443-c848-4a33-a659-a241b3b19cbf" containerName="barbican-worker-log" Mar 08 00:45:43 crc kubenswrapper[4762]: E0308 00:45:43.303344 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf8ba27-7964-4650-9316-aabba252ed71" containerName="barbican-keystone-listener-log" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303351 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf8ba27-7964-4650-9316-aabba252ed71" containerName="barbican-keystone-listener-log" Mar 08 00:45:43 crc kubenswrapper[4762]: E0308 00:45:43.303365 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab202f58-df7d-49ee-bf13-116fee0dc87c" containerName="dnsmasq-dns" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303371 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab202f58-df7d-49ee-bf13-116fee0dc87c" containerName="dnsmasq-dns" Mar 08 00:45:43 crc kubenswrapper[4762]: E0308 00:45:43.303384 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4992e7da-9de7-4354-a35f-a68f8bd0013a" containerName="heat-db-sync" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303392 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="4992e7da-9de7-4354-a35f-a68f8bd0013a" containerName="heat-db-sync" Mar 08 00:45:43 crc kubenswrapper[4762]: E0308 00:45:43.303398 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d9b443-c848-4a33-a659-a241b3b19cbf" containerName="barbican-worker" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303405 4762 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d9b443-c848-4a33-a659-a241b3b19cbf" containerName="barbican-worker" Mar 08 00:45:43 crc kubenswrapper[4762]: E0308 00:45:43.303421 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab202f58-df7d-49ee-bf13-116fee0dc87c" containerName="init" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303428 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab202f58-df7d-49ee-bf13-116fee0dc87c" containerName="init" Mar 08 00:45:43 crc kubenswrapper[4762]: E0308 00:45:43.303437 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4ec81f-27c0-46fd-9959-792c057d62f7" containerName="barbican-api-log" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303442 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4ec81f-27c0-46fd-9959-792c057d62f7" containerName="barbican-api-log" Mar 08 00:45:43 crc kubenswrapper[4762]: E0308 00:45:43.303454 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" containerName="sg-core" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303459 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" containerName="sg-core" Mar 08 00:45:43 crc kubenswrapper[4762]: E0308 00:45:43.303473 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb4ec81f-27c0-46fd-9959-792c057d62f7" containerName="barbican-api" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303479 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb4ec81f-27c0-46fd-9959-792c057d62f7" containerName="barbican-api" Mar 08 00:45:43 crc kubenswrapper[4762]: E0308 00:45:43.303488 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faf8ba27-7964-4650-9316-aabba252ed71" containerName="barbican-keystone-listener" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303493 4762 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="faf8ba27-7964-4650-9316-aabba252ed71" containerName="barbican-keystone-listener" Mar 08 00:45:43 crc kubenswrapper[4762]: E0308 00:45:43.303501 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" containerName="ceilometer-notification-agent" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303507 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" containerName="ceilometer-notification-agent" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303681 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf8ba27-7964-4650-9316-aabba252ed71" containerName="barbican-keystone-listener-log" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303693 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4ec81f-27c0-46fd-9959-792c057d62f7" containerName="barbican-api-log" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303702 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d9b443-c848-4a33-a659-a241b3b19cbf" containerName="barbican-worker-log" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303715 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="faf8ba27-7964-4650-9316-aabba252ed71" containerName="barbican-keystone-listener" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303730 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab202f58-df7d-49ee-bf13-116fee0dc87c" containerName="dnsmasq-dns" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303742 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb4ec81f-27c0-46fd-9959-792c057d62f7" containerName="barbican-api" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303750 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d9b443-c848-4a33-a659-a241b3b19cbf" containerName="barbican-worker" Mar 08 00:45:43 crc kubenswrapper[4762]: 
I0308 00:45:43.303851 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" containerName="sg-core" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303862 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="4992e7da-9de7-4354-a35f-a68f8bd0013a" containerName="heat-db-sync" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.303874 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" containerName="ceilometer-notification-agent" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.305681 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.308229 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.309428 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.312395 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.388915 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-scripts\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.388991 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-config-data\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 
00:45:43.389019 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t2l9\" (UniqueName: \"kubernetes.io/projected/7606b7b7-2804-4c3f-b617-34e50d83c068-kube-api-access-7t2l9\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.389042 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7606b7b7-2804-4c3f-b617-34e50d83c068-log-httpd\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.389150 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.389178 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.389536 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7606b7b7-2804-4c3f-b617-34e50d83c068-run-httpd\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.491619 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-scripts\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.491929 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-config-data\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.492076 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t2l9\" (UniqueName: \"kubernetes.io/projected/7606b7b7-2804-4c3f-b617-34e50d83c068-kube-api-access-7t2l9\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.492097 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7606b7b7-2804-4c3f-b617-34e50d83c068-log-httpd\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.492137 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.492157 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc 
kubenswrapper[4762]: I0308 00:45:43.492223 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7606b7b7-2804-4c3f-b617-34e50d83c068-run-httpd\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.492627 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7606b7b7-2804-4c3f-b617-34e50d83c068-run-httpd\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.493299 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7606b7b7-2804-4c3f-b617-34e50d83c068-log-httpd\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.500494 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.501004 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-scripts\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.501469 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.501972 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-config-data\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.507176 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t2l9\" (UniqueName: \"kubernetes.io/projected/7606b7b7-2804-4c3f-b617-34e50d83c068-kube-api-access-7t2l9\") pod \"ceilometer-0\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.622812 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.727511 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-pxw9p" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.797581 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-db-sync-config-data\") pod \"8511806b-d3fb-48df-8348-33f84645e2a3\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.797686 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbcrw\" (UniqueName: \"kubernetes.io/projected/8511806b-d3fb-48df-8348-33f84645e2a3-kube-api-access-kbcrw\") pod \"8511806b-d3fb-48df-8348-33f84645e2a3\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.797741 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8511806b-d3fb-48df-8348-33f84645e2a3-etc-machine-id\") pod \"8511806b-d3fb-48df-8348-33f84645e2a3\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.797838 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-combined-ca-bundle\") pod \"8511806b-d3fb-48df-8348-33f84645e2a3\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.797870 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-scripts\") pod \"8511806b-d3fb-48df-8348-33f84645e2a3\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.797899 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-config-data\") pod \"8511806b-d3fb-48df-8348-33f84645e2a3\" (UID: \"8511806b-d3fb-48df-8348-33f84645e2a3\") " Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.798918 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8511806b-d3fb-48df-8348-33f84645e2a3-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8511806b-d3fb-48df-8348-33f84645e2a3" (UID: "8511806b-d3fb-48df-8348-33f84645e2a3"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.803862 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-scripts" (OuterVolumeSpecName: "scripts") pod "8511806b-d3fb-48df-8348-33f84645e2a3" (UID: "8511806b-d3fb-48df-8348-33f84645e2a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.808640 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8511806b-d3fb-48df-8348-33f84645e2a3" (UID: "8511806b-d3fb-48df-8348-33f84645e2a3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.809915 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8511806b-d3fb-48df-8348-33f84645e2a3-kube-api-access-kbcrw" (OuterVolumeSpecName: "kube-api-access-kbcrw") pod "8511806b-d3fb-48df-8348-33f84645e2a3" (UID: "8511806b-d3fb-48df-8348-33f84645e2a3"). InnerVolumeSpecName "kube-api-access-kbcrw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.827049 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8511806b-d3fb-48df-8348-33f84645e2a3" (UID: "8511806b-d3fb-48df-8348-33f84645e2a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.868628 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-config-data" (OuterVolumeSpecName: "config-data") pod "8511806b-d3fb-48df-8348-33f84645e2a3" (UID: "8511806b-d3fb-48df-8348-33f84645e2a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.900468 4762 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.900496 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbcrw\" (UniqueName: \"kubernetes.io/projected/8511806b-d3fb-48df-8348-33f84645e2a3-kube-api-access-kbcrw\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.900508 4762 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8511806b-d3fb-48df-8348-33f84645e2a3-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.900517 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-combined-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.900525 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:43 crc kubenswrapper[4762]: I0308 00:45:43.900534 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8511806b-d3fb-48df-8348-33f84645e2a3-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:44 crc kubenswrapper[4762]: W0308 00:45:44.080558 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7606b7b7_2804_4c3f_b617_34e50d83c068.slice/crio-5355440d5b4a3c7bee4c0c6be7e41e3952953aca41d8ef3a1149c2aba971054f WatchSource:0}: Error finding container 5355440d5b4a3c7bee4c0c6be7e41e3952953aca41d8ef3a1149c2aba971054f: Status 404 returned error can't find the container with id 5355440d5b4a3c7bee4c0c6be7e41e3952953aca41d8ef3a1149c2aba971054f Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.088579 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.191219 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"34431af11881eaae8980f4fa624e154f145ace7580df4ac523d50069777cde15"} Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.198052 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-pxw9p" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.198043 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-pxw9p" event={"ID":"8511806b-d3fb-48df-8348-33f84645e2a3","Type":"ContainerDied","Data":"7545744ac9c55218e5e7d435ae53169aca902fa675dc431369ef735e10384878"} Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.198198 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7545744ac9c55218e5e7d435ae53169aca902fa675dc431369ef735e10384878" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.199637 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7606b7b7-2804-4c3f-b617-34e50d83c068","Type":"ContainerStarted","Data":"5355440d5b4a3c7bee4c0c6be7e41e3952953aca41d8ef3a1149c2aba971054f"} Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.506397 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 08 00:45:44 crc kubenswrapper[4762]: E0308 00:45:44.506956 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8511806b-d3fb-48df-8348-33f84645e2a3" containerName="cinder-db-sync" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.506989 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8511806b-d3fb-48df-8348-33f84645e2a3" containerName="cinder-db-sync" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.507210 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="8511806b-d3fb-48df-8348-33f84645e2a3" containerName="cinder-db-sync" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.508222 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.513714 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.513744 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.513808 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-fnlb6" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.513865 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.529864 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.546431 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-fvkkr"] Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.549020 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.583245 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-fvkkr"] Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.619680 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-config-data\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.619732 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.619776 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-scripts\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.619799 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-config\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.619841 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.619888 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.619915 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v96br\" (UniqueName: \"kubernetes.io/projected/d6a1f41f-1dbf-4da0-b725-eb520868478f-kube-api-access-v96br\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.619947 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.619990 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2ns6\" (UniqueName: \"kubernetes.io/projected/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-kube-api-access-h2ns6\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.620016 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.620041 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.620093 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-dns-svc\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.687783 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.690336 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.694592 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.719061 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.721544 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2ns6\" (UniqueName: \"kubernetes.io/projected/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-kube-api-access-h2ns6\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.721597 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.721623 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.721668 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-dns-svc\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.721716 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-config-data\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.721733 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.721769 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-config\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.721784 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-scripts\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.721816 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.721853 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.721874 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v96br\" (UniqueName: \"kubernetes.io/projected/d6a1f41f-1dbf-4da0-b725-eb520868478f-kube-api-access-v96br\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.721900 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.721978 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.723005 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-config\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.725676 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-dns-swift-storage-0\") pod 
\"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.727380 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.729305 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.734153 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.736808 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-dns-svc\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.737733 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-config-data\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc 
kubenswrapper[4762]: I0308 00:45:44.739173 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.741089 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-scripts\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.760347 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2ns6\" (UniqueName: \"kubernetes.io/projected/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-kube-api-access-h2ns6\") pod \"cinder-scheduler-0\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.760553 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v96br\" (UniqueName: \"kubernetes.io/projected/d6a1f41f-1dbf-4da0-b725-eb520868478f-kube-api-access-v96br\") pod \"dnsmasq-dns-6578955fd5-fvkkr\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.823607 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.823662 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6bd6720-7be6-4632-a01c-6478463ecb5b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.823704 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-config-data-custom\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.823767 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-config-data\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.823794 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6bd6720-7be6-4632-a01c-6478463ecb5b-logs\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.823815 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqrww\" (UniqueName: \"kubernetes.io/projected/d6bd6720-7be6-4632-a01c-6478463ecb5b-kube-api-access-jqrww\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.823861 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-scripts\") pod 
\"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.824549 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.865037 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-5575787c44-s8z4s" podUID="3876d14f-7657-46c3-90dd-145ba8955ccb" containerName="neutron-httpd" probeResult="failure" output="Get \"http://10.217.0.184:9696/\": dial tcp 10.217.0.184:9696: connect: connection refused" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.881834 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.925732 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-scripts\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.925839 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.925859 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6bd6720-7be6-4632-a01c-6478463ecb5b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.925895 4762 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-config-data-custom\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.925948 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-config-data\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.925972 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6bd6720-7be6-4632-a01c-6478463ecb5b-logs\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.925991 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6bd6720-7be6-4632-a01c-6478463ecb5b-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.926003 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqrww\" (UniqueName: \"kubernetes.io/projected/d6bd6720-7be6-4632-a01c-6478463ecb5b-kube-api-access-jqrww\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.926872 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6bd6720-7be6-4632-a01c-6478463ecb5b-logs\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc 
kubenswrapper[4762]: I0308 00:45:44.931249 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-scripts\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.931634 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-config-data\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.931718 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-config-data-custom\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.937354 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:44 crc kubenswrapper[4762]: I0308 00:45:44.943645 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqrww\" (UniqueName: \"kubernetes.io/projected/d6bd6720-7be6-4632-a01c-6478463ecb5b-kube-api-access-jqrww\") pod \"cinder-api-0\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " pod="openstack/cinder-api-0" Mar 08 00:45:45 crc kubenswrapper[4762]: I0308 00:45:45.020348 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 08 00:45:45 crc kubenswrapper[4762]: I0308 00:45:45.362730 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b08eae00-a546-4fa0-bf56-8dbba6c3ffb3" path="/var/lib/kubelet/pods/b08eae00-a546-4fa0-bf56-8dbba6c3ffb3/volumes" Mar 08 00:45:45 crc kubenswrapper[4762]: I0308 00:45:45.363909 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7606b7b7-2804-4c3f-b617-34e50d83c068","Type":"ContainerStarted","Data":"c104e693bb0acf095a9e8a0c4f40c2f743215c91460876e32786634352dd9cbc"} Mar 08 00:45:45 crc kubenswrapper[4762]: I0308 00:45:45.409391 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 08 00:45:45 crc kubenswrapper[4762]: I0308 00:45:45.432513 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-fvkkr"] Mar 08 00:45:45 crc kubenswrapper[4762]: I0308 00:45:45.680148 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 08 00:45:45 crc kubenswrapper[4762]: W0308 00:45:45.683053 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6bd6720_7be6_4632_a01c_6478463ecb5b.slice/crio-2b4157e688a39ee112d2d1cfac8d07b241c2a519198932b42644ea4ed7efadd9 WatchSource:0}: Error finding container 2b4157e688a39ee112d2d1cfac8d07b241c2a519198932b42644ea4ed7efadd9: Status 404 returned error can't find the container with id 2b4157e688a39ee112d2d1cfac8d07b241c2a519198932b42644ea4ed7efadd9 Mar 08 00:45:46 crc kubenswrapper[4762]: I0308 00:45:46.300977 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d6bd6720-7be6-4632-a01c-6478463ecb5b","Type":"ContainerStarted","Data":"2b4157e688a39ee112d2d1cfac8d07b241c2a519198932b42644ea4ed7efadd9"} Mar 08 00:45:46 crc kubenswrapper[4762]: I0308 00:45:46.303085 4762 generic.go:334] 
"Generic (PLEG): container finished" podID="d6a1f41f-1dbf-4da0-b725-eb520868478f" containerID="2aea23080173209c1153febd2875583c8c1e75c511abcae18047f7c7426dcf4e" exitCode=0 Mar 08 00:45:46 crc kubenswrapper[4762]: I0308 00:45:46.303149 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" event={"ID":"d6a1f41f-1dbf-4da0-b725-eb520868478f","Type":"ContainerDied","Data":"2aea23080173209c1153febd2875583c8c1e75c511abcae18047f7c7426dcf4e"} Mar 08 00:45:46 crc kubenswrapper[4762]: I0308 00:45:46.303179 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" event={"ID":"d6a1f41f-1dbf-4da0-b725-eb520868478f","Type":"ContainerStarted","Data":"1987212fbd008032d7592f08c3f8c83c2dcf3d0b4d081eadf479a94215ea2bb7"} Mar 08 00:45:46 crc kubenswrapper[4762]: I0308 00:45:46.304793 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e","Type":"ContainerStarted","Data":"9374969cd6f9daaab61c357a49d98bb5ef7144890f0f45aac09c55531d54cc54"} Mar 08 00:45:46 crc kubenswrapper[4762]: I0308 00:45:46.308492 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7606b7b7-2804-4c3f-b617-34e50d83c068","Type":"ContainerStarted","Data":"01a2601dc3b06c4691de40776acdb16f2022d910d19fd4ad4f22b53dbc22e68d"} Mar 08 00:45:46 crc kubenswrapper[4762]: I0308 00:45:46.868777 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 08 00:45:47 crc kubenswrapper[4762]: I0308 00:45:47.160797 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:47 crc kubenswrapper[4762]: I0308 00:45:47.358925 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e","Type":"ContainerStarted","Data":"f8235f2de4c517589f97dfd0d2d3e2e95bb10bc573f45de5d2ed7d251f3b95f1"} Mar 08 00:45:47 crc kubenswrapper[4762]: I0308 00:45:47.365987 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7606b7b7-2804-4c3f-b617-34e50d83c068","Type":"ContainerStarted","Data":"f5c8f3df74edc4e7bdfe58659bdaef728aae86bc6914e4be290b59b411c8b844"} Mar 08 00:45:47 crc kubenswrapper[4762]: I0308 00:45:47.370726 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d6bd6720-7be6-4632-a01c-6478463ecb5b","Type":"ContainerStarted","Data":"6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a"} Mar 08 00:45:47 crc kubenswrapper[4762]: I0308 00:45:47.383891 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" event={"ID":"d6a1f41f-1dbf-4da0-b725-eb520868478f","Type":"ContainerStarted","Data":"047c9cf2a60c85dda0798e4e1f1327bfb336f2ad1c6778e723eba8a620eaafc8"} Mar 08 00:45:47 crc kubenswrapper[4762]: I0308 00:45:47.384172 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:47 crc kubenswrapper[4762]: I0308 00:45:47.418627 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" podStartSLOduration=3.418606103 podStartE2EDuration="3.418606103s" podCreationTimestamp="2026-03-08 00:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:45:47.408345781 +0000 UTC m=+1368.882490125" watchObservedRunningTime="2026-03-08 00:45:47.418606103 +0000 UTC m=+1368.892750447" Mar 08 00:45:47 crc kubenswrapper[4762]: I0308 00:45:47.651870 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 
08 00:45:47 crc kubenswrapper[4762]: I0308 00:45:47.789320 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-86c4db5cfd-rtfn2" Mar 08 00:45:47 crc kubenswrapper[4762]: I0308 00:45:47.850352 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-fd79d57f6-9kghw"] Mar 08 00:45:47 crc kubenswrapper[4762]: I0308 00:45:47.850592 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-fd79d57f6-9kghw" podUID="198d66d2-adcf-4028-9a59-9e396513f44d" containerName="barbican-api-log" containerID="cri-o://cc7decc1fab9af910422869646849ec26e4e8db62bea22fee54eb9ba87efb3c1" gracePeriod=30 Mar 08 00:45:47 crc kubenswrapper[4762]: I0308 00:45:47.850721 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-fd79d57f6-9kghw" podUID="198d66d2-adcf-4028-9a59-9e396513f44d" containerName="barbican-api" containerID="cri-o://7b77df23ed8910c4f0b3c488ab8c35c35c61352f2c472823ed1b74bcb47115f8" gracePeriod=30 Mar 08 00:45:48 crc kubenswrapper[4762]: I0308 00:45:48.396893 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d6bd6720-7be6-4632-a01c-6478463ecb5b","Type":"ContainerStarted","Data":"50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10"} Mar 08 00:45:48 crc kubenswrapper[4762]: I0308 00:45:48.397299 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 08 00:45:48 crc kubenswrapper[4762]: I0308 00:45:48.397085 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d6bd6720-7be6-4632-a01c-6478463ecb5b" containerName="cinder-api" containerID="cri-o://50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10" gracePeriod=30 Mar 08 00:45:48 crc kubenswrapper[4762]: I0308 00:45:48.397000 4762 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/cinder-api-0" podUID="d6bd6720-7be6-4632-a01c-6478463ecb5b" containerName="cinder-api-log" containerID="cri-o://6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a" gracePeriod=30 Mar 08 00:45:48 crc kubenswrapper[4762]: I0308 00:45:48.399446 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e","Type":"ContainerStarted","Data":"aa2a3e080bf1cc7f9b98e7a45f33c101edc30e2a6959b084d1d0e55b8a9b8d13"} Mar 08 00:45:48 crc kubenswrapper[4762]: I0308 00:45:48.403992 4762 generic.go:334] "Generic (PLEG): container finished" podID="198d66d2-adcf-4028-9a59-9e396513f44d" containerID="cc7decc1fab9af910422869646849ec26e4e8db62bea22fee54eb9ba87efb3c1" exitCode=143 Mar 08 00:45:48 crc kubenswrapper[4762]: I0308 00:45:48.404046 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd79d57f6-9kghw" event={"ID":"198d66d2-adcf-4028-9a59-9e396513f44d","Type":"ContainerDied","Data":"cc7decc1fab9af910422869646849ec26e4e8db62bea22fee54eb9ba87efb3c1"} Mar 08 00:45:48 crc kubenswrapper[4762]: I0308 00:45:48.420397 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.420381611 podStartE2EDuration="4.420381611s" podCreationTimestamp="2026-03-08 00:45:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:45:48.415512163 +0000 UTC m=+1369.889656507" watchObservedRunningTime="2026-03-08 00:45:48.420381611 +0000 UTC m=+1369.894525955" Mar 08 00:45:48 crc kubenswrapper[4762]: I0308 00:45:48.447972 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.6981480209999997 podStartE2EDuration="4.447956931s" podCreationTimestamp="2026-03-08 00:45:44 +0000 UTC" firstStartedPulling="2026-03-08 00:45:45.439934015 +0000 
UTC m=+1366.914078349" lastFinishedPulling="2026-03-08 00:45:46.189742915 +0000 UTC m=+1367.663887259" observedRunningTime="2026-03-08 00:45:48.441930167 +0000 UTC m=+1369.916074511" watchObservedRunningTime="2026-03-08 00:45:48.447956931 +0000 UTC m=+1369.922101275" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.009259 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.158452 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-config-data-custom\") pod \"d6bd6720-7be6-4632-a01c-6478463ecb5b\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.158543 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqrww\" (UniqueName: \"kubernetes.io/projected/d6bd6720-7be6-4632-a01c-6478463ecb5b-kube-api-access-jqrww\") pod \"d6bd6720-7be6-4632-a01c-6478463ecb5b\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.158578 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6bd6720-7be6-4632-a01c-6478463ecb5b-etc-machine-id\") pod \"d6bd6720-7be6-4632-a01c-6478463ecb5b\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.158616 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-combined-ca-bundle\") pod \"d6bd6720-7be6-4632-a01c-6478463ecb5b\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.158639 4762 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-scripts\") pod \"d6bd6720-7be6-4632-a01c-6478463ecb5b\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.158907 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6bd6720-7be6-4632-a01c-6478463ecb5b-logs\") pod \"d6bd6720-7be6-4632-a01c-6478463ecb5b\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.158909 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d6bd6720-7be6-4632-a01c-6478463ecb5b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d6bd6720-7be6-4632-a01c-6478463ecb5b" (UID: "d6bd6720-7be6-4632-a01c-6478463ecb5b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.158949 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-config-data\") pod \"d6bd6720-7be6-4632-a01c-6478463ecb5b\" (UID: \"d6bd6720-7be6-4632-a01c-6478463ecb5b\") " Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.159510 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6bd6720-7be6-4632-a01c-6478463ecb5b-logs" (OuterVolumeSpecName: "logs") pod "d6bd6720-7be6-4632-a01c-6478463ecb5b" (UID: "d6bd6720-7be6-4632-a01c-6478463ecb5b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.159731 4762 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d6bd6720-7be6-4632-a01c-6478463ecb5b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.159745 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6bd6720-7be6-4632-a01c-6478463ecb5b-logs\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.165181 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d6bd6720-7be6-4632-a01c-6478463ecb5b" (UID: "d6bd6720-7be6-4632-a01c-6478463ecb5b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.167418 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-scripts" (OuterVolumeSpecName: "scripts") pod "d6bd6720-7be6-4632-a01c-6478463ecb5b" (UID: "d6bd6720-7be6-4632-a01c-6478463ecb5b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.169644 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6bd6720-7be6-4632-a01c-6478463ecb5b-kube-api-access-jqrww" (OuterVolumeSpecName: "kube-api-access-jqrww") pod "d6bd6720-7be6-4632-a01c-6478463ecb5b" (UID: "d6bd6720-7be6-4632-a01c-6478463ecb5b"). InnerVolumeSpecName "kube-api-access-jqrww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.193836 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6bd6720-7be6-4632-a01c-6478463ecb5b" (UID: "d6bd6720-7be6-4632-a01c-6478463ecb5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.216921 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-config-data" (OuterVolumeSpecName: "config-data") pod "d6bd6720-7be6-4632-a01c-6478463ecb5b" (UID: "d6bd6720-7be6-4632-a01c-6478463ecb5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.261391 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.261437 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.261445 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.261454 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6bd6720-7be6-4632-a01c-6478463ecb5b-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 08 
00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.261462 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqrww\" (UniqueName: \"kubernetes.io/projected/d6bd6720-7be6-4632-a01c-6478463ecb5b-kube-api-access-jqrww\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.421305 4762 generic.go:334] "Generic (PLEG): container finished" podID="d6bd6720-7be6-4632-a01c-6478463ecb5b" containerID="50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10" exitCode=0 Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.421342 4762 generic.go:334] "Generic (PLEG): container finished" podID="d6bd6720-7be6-4632-a01c-6478463ecb5b" containerID="6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a" exitCode=143 Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.421407 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d6bd6720-7be6-4632-a01c-6478463ecb5b","Type":"ContainerDied","Data":"50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10"} Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.421436 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d6bd6720-7be6-4632-a01c-6478463ecb5b","Type":"ContainerDied","Data":"6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a"} Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.421445 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d6bd6720-7be6-4632-a01c-6478463ecb5b","Type":"ContainerDied","Data":"2b4157e688a39ee112d2d1cfac8d07b241c2a519198932b42644ea4ed7efadd9"} Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.421460 4762 scope.go:117] "RemoveContainer" containerID="50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.421585 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.427874 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7606b7b7-2804-4c3f-b617-34e50d83c068","Type":"ContainerStarted","Data":"94f3b589ce361ec6a846395a912482e29424709696e2e434b55d669fc7d2f7f3"} Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.428068 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.470396 4762 scope.go:117] "RemoveContainer" containerID="6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.473613 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.899928117 podStartE2EDuration="6.473596617s" podCreationTimestamp="2026-03-08 00:45:43 +0000 UTC" firstStartedPulling="2026-03-08 00:45:44.083432045 +0000 UTC m=+1365.557576389" lastFinishedPulling="2026-03-08 00:45:48.657100525 +0000 UTC m=+1370.131244889" observedRunningTime="2026-03-08 00:45:49.461532479 +0000 UTC m=+1370.935676823" watchObservedRunningTime="2026-03-08 00:45:49.473596617 +0000 UTC m=+1370.947740961" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.492095 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.504745 4762 scope.go:117] "RemoveContainer" containerID="50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.508820 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 08 00:45:49 crc kubenswrapper[4762]: E0308 00:45:49.512471 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10\": container with ID starting with 50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10 not found: ID does not exist" containerID="50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.512506 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10"} err="failed to get container status \"50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10\": rpc error: code = NotFound desc = could not find container \"50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10\": container with ID starting with 50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10 not found: ID does not exist" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.512533 4762 scope.go:117] "RemoveContainer" containerID="6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a" Mar 08 00:45:49 crc kubenswrapper[4762]: E0308 00:45:49.512898 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a\": container with ID starting with 6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a not found: ID does not exist" containerID="6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.512919 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a"} err="failed to get container status \"6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a\": rpc error: code = NotFound desc = could not find container \"6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a\": container with ID 
starting with 6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a not found: ID does not exist" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.512934 4762 scope.go:117] "RemoveContainer" containerID="50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.521862 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10"} err="failed to get container status \"50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10\": rpc error: code = NotFound desc = could not find container \"50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10\": container with ID starting with 50bcec5f16e6f6837439d0fe6dc1ad63b3f74143fbaa024114d428beefa9ef10 not found: ID does not exist" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.521897 4762 scope.go:117] "RemoveContainer" containerID="6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.522462 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a"} err="failed to get container status \"6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a\": rpc error: code = NotFound desc = could not find container \"6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a\": container with ID starting with 6200ff5ea612868534c1a602f22af2d2710fd33957fd36136523d77dcf54b26a not found: ID does not exist" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.528100 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 08 00:45:49 crc kubenswrapper[4762]: E0308 00:45:49.528587 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6bd6720-7be6-4632-a01c-6478463ecb5b" 
containerName="cinder-api-log" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.528603 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6bd6720-7be6-4632-a01c-6478463ecb5b" containerName="cinder-api-log" Mar 08 00:45:49 crc kubenswrapper[4762]: E0308 00:45:49.528630 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6bd6720-7be6-4632-a01c-6478463ecb5b" containerName="cinder-api" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.528636 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6bd6720-7be6-4632-a01c-6478463ecb5b" containerName="cinder-api" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.528823 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6bd6720-7be6-4632-a01c-6478463ecb5b" containerName="cinder-api-log" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.528845 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6bd6720-7be6-4632-a01c-6478463ecb5b" containerName="cinder-api" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.529882 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.532439 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.532527 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.534652 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.538659 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.674873 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.674914 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-scripts\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.675004 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/675453de-83d4-4420-a560-4a11696a849c-logs\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.675059 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-config-data\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.675162 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.675259 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dl97\" (UniqueName: \"kubernetes.io/projected/675453de-83d4-4420-a560-4a11696a849c-kube-api-access-7dl97\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.675288 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/675453de-83d4-4420-a560-4a11696a849c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.675701 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-config-data-custom\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.675846 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-combined-ca-bundle\") pod 
\"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.778300 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-config-data-custom\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.778442 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.778574 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.778612 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-scripts\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.778668 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/675453de-83d4-4420-a560-4a11696a849c-logs\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.778699 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-config-data\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.778739 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.778828 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dl97\" (UniqueName: \"kubernetes.io/projected/675453de-83d4-4420-a560-4a11696a849c-kube-api-access-7dl97\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.778862 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/675453de-83d4-4420-a560-4a11696a849c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.779058 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/675453de-83d4-4420-a560-4a11696a849c-etc-machine-id\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.779507 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/675453de-83d4-4420-a560-4a11696a849c-logs\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc 
kubenswrapper[4762]: I0308 00:45:49.782368 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-config-data-custom\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.782657 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-config-data\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.784170 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.784402 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-public-tls-certs\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.785193 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.786126 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/675453de-83d4-4420-a560-4a11696a849c-scripts\") pod \"cinder-api-0\" (UID: 
\"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.799514 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dl97\" (UniqueName: \"kubernetes.io/projected/675453de-83d4-4420-a560-4a11696a849c-kube-api-access-7dl97\") pod \"cinder-api-0\" (UID: \"675453de-83d4-4420-a560-4a11696a849c\") " pod="openstack/cinder-api-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.825459 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 08 00:45:49 crc kubenswrapper[4762]: I0308 00:45:49.914175 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 08 00:45:50 crc kubenswrapper[4762]: W0308 00:45:50.406190 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod675453de_83d4_4420_a560_4a11696a849c.slice/crio-cc47e241d56ea158b8c09b701f715eb357e119bf986451dc469e5b20552c8552 WatchSource:0}: Error finding container cc47e241d56ea158b8c09b701f715eb357e119bf986451dc469e5b20552c8552: Status 404 returned error can't find the container with id cc47e241d56ea158b8c09b701f715eb357e119bf986451dc469e5b20552c8552 Mar 08 00:45:50 crc kubenswrapper[4762]: I0308 00:45:50.415201 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 08 00:45:50 crc kubenswrapper[4762]: I0308 00:45:50.450567 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"675453de-83d4-4420-a560-4a11696a849c","Type":"ContainerStarted","Data":"cc47e241d56ea158b8c09b701f715eb357e119bf986451dc469e5b20552c8552"} Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.013811 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-fd79d57f6-9kghw" podUID="198d66d2-adcf-4028-9a59-9e396513f44d" 
containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.194:9311/healthcheck\": read tcp 10.217.0.2:45482->10.217.0.194:9311: read: connection reset by peer" Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.013811 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-fd79d57f6-9kghw" podUID="198d66d2-adcf-4028-9a59-9e396513f44d" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.194:9311/healthcheck\": read tcp 10.217.0.2:45466->10.217.0.194:9311: read: connection reset by peer" Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.279132 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6bd6720-7be6-4632-a01c-6478463ecb5b" path="/var/lib/kubelet/pods/d6bd6720-7be6-4632-a01c-6478463ecb5b/volumes" Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.464033 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"675453de-83d4-4420-a560-4a11696a849c","Type":"ContainerStarted","Data":"d4f6098faf5baf096a17a2d5dcc56170a20c47a42325395208570c7da9e7c3d3"} Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.468867 4762 generic.go:334] "Generic (PLEG): container finished" podID="198d66d2-adcf-4028-9a59-9e396513f44d" containerID="7b77df23ed8910c4f0b3c488ab8c35c35c61352f2c472823ed1b74bcb47115f8" exitCode=0 Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.468903 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd79d57f6-9kghw" event={"ID":"198d66d2-adcf-4028-9a59-9e396513f44d","Type":"ContainerDied","Data":"7b77df23ed8910c4f0b3c488ab8c35c35c61352f2c472823ed1b74bcb47115f8"} Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.579340 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.728522 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-config-data\") pod \"198d66d2-adcf-4028-9a59-9e396513f44d\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.728612 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgwfp\" (UniqueName: \"kubernetes.io/projected/198d66d2-adcf-4028-9a59-9e396513f44d-kube-api-access-xgwfp\") pod \"198d66d2-adcf-4028-9a59-9e396513f44d\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.728786 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/198d66d2-adcf-4028-9a59-9e396513f44d-logs\") pod \"198d66d2-adcf-4028-9a59-9e396513f44d\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.728831 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-combined-ca-bundle\") pod \"198d66d2-adcf-4028-9a59-9e396513f44d\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.728868 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-config-data-custom\") pod \"198d66d2-adcf-4028-9a59-9e396513f44d\" (UID: \"198d66d2-adcf-4028-9a59-9e396513f44d\") " Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.729242 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/198d66d2-adcf-4028-9a59-9e396513f44d-logs" (OuterVolumeSpecName: "logs") pod "198d66d2-adcf-4028-9a59-9e396513f44d" (UID: "198d66d2-adcf-4028-9a59-9e396513f44d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.729356 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/198d66d2-adcf-4028-9a59-9e396513f44d-logs\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.735236 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "198d66d2-adcf-4028-9a59-9e396513f44d" (UID: "198d66d2-adcf-4028-9a59-9e396513f44d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.736075 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/198d66d2-adcf-4028-9a59-9e396513f44d-kube-api-access-xgwfp" (OuterVolumeSpecName: "kube-api-access-xgwfp") pod "198d66d2-adcf-4028-9a59-9e396513f44d" (UID: "198d66d2-adcf-4028-9a59-9e396513f44d"). InnerVolumeSpecName "kube-api-access-xgwfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.777899 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "198d66d2-adcf-4028-9a59-9e396513f44d" (UID: "198d66d2-adcf-4028-9a59-9e396513f44d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.787353 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-config-data" (OuterVolumeSpecName: "config-data") pod "198d66d2-adcf-4028-9a59-9e396513f44d" (UID: "198d66d2-adcf-4028-9a59-9e396513f44d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.831871 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.831910 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgwfp\" (UniqueName: \"kubernetes.io/projected/198d66d2-adcf-4028-9a59-9e396513f44d-kube-api-access-xgwfp\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.831922 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:51 crc kubenswrapper[4762]: I0308 00:45:51.831931 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/198d66d2-adcf-4028-9a59-9e396513f44d-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:52 crc kubenswrapper[4762]: I0308 00:45:52.483106 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"675453de-83d4-4420-a560-4a11696a849c","Type":"ContainerStarted","Data":"fcdd7313b8939be08fa0c10833f1becb7244b3835bde77114cd98a240e3d00a7"} Mar 08 00:45:52 crc kubenswrapper[4762]: I0308 00:45:52.483526 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/cinder-api-0" Mar 08 00:45:52 crc kubenswrapper[4762]: I0308 00:45:52.486640 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-fd79d57f6-9kghw" event={"ID":"198d66d2-adcf-4028-9a59-9e396513f44d","Type":"ContainerDied","Data":"a8f8a083cf06b482a32d488af2a550ae6e4fd1e322ee088703033342137641eb"} Mar 08 00:45:52 crc kubenswrapper[4762]: I0308 00:45:52.486691 4762 scope.go:117] "RemoveContainer" containerID="7b77df23ed8910c4f0b3c488ab8c35c35c61352f2c472823ed1b74bcb47115f8" Mar 08 00:45:52 crc kubenswrapper[4762]: I0308 00:45:52.486855 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-fd79d57f6-9kghw" Mar 08 00:45:52 crc kubenswrapper[4762]: I0308 00:45:52.509732 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.509716671 podStartE2EDuration="3.509716671s" podCreationTimestamp="2026-03-08 00:45:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:45:52.505449841 +0000 UTC m=+1373.979594205" watchObservedRunningTime="2026-03-08 00:45:52.509716671 +0000 UTC m=+1373.983861015" Mar 08 00:45:52 crc kubenswrapper[4762]: I0308 00:45:52.531587 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-fd79d57f6-9kghw"] Mar 08 00:45:52 crc kubenswrapper[4762]: I0308 00:45:52.539142 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-fd79d57f6-9kghw"] Mar 08 00:45:52 crc kubenswrapper[4762]: I0308 00:45:52.542480 4762 scope.go:117] "RemoveContainer" containerID="cc7decc1fab9af910422869646849ec26e4e8db62bea22fee54eb9ba87efb3c1" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.082064 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-849f745c8c-pjhz2" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 
00:45:53.092532 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5575787c44-s8z4s_3876d14f-7657-46c3-90dd-145ba8955ccb/neutron-api/0.log" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.092638 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.169376 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-httpd-config\") pod \"3876d14f-7657-46c3-90dd-145ba8955ccb\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.169416 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-ovndb-tls-certs\") pod \"3876d14f-7657-46c3-90dd-145ba8955ccb\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.169499 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-combined-ca-bundle\") pod \"3876d14f-7657-46c3-90dd-145ba8955ccb\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.169566 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wr9v\" (UniqueName: \"kubernetes.io/projected/3876d14f-7657-46c3-90dd-145ba8955ccb-kube-api-access-4wr9v\") pod \"3876d14f-7657-46c3-90dd-145ba8955ccb\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.169677 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-config\") pod \"3876d14f-7657-46c3-90dd-145ba8955ccb\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.176873 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3876d14f-7657-46c3-90dd-145ba8955ccb" (UID: "3876d14f-7657-46c3-90dd-145ba8955ccb"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.183369 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3876d14f-7657-46c3-90dd-145ba8955ccb-kube-api-access-4wr9v" (OuterVolumeSpecName: "kube-api-access-4wr9v") pod "3876d14f-7657-46c3-90dd-145ba8955ccb" (UID: "3876d14f-7657-46c3-90dd-145ba8955ccb"). InnerVolumeSpecName "kube-api-access-4wr9v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.221773 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cc6ddb745-wdmcn"] Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.222028 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-cc6ddb745-wdmcn" podUID="3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" containerName="neutron-api" containerID="cri-o://ff8363a67a1951b3052fbc19ff59c6e68173a5d969bcf4177fdd189d4bf4fd74" gracePeriod=30 Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.224469 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-cc6ddb745-wdmcn" podUID="3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" containerName="neutron-httpd" containerID="cri-o://1f0ecddfe95f16a396f116e15350bd909a31cf5009d8363c7dd10f9b4f72243b" gracePeriod=30 Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.274207 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-config" (OuterVolumeSpecName: "config") pod "3876d14f-7657-46c3-90dd-145ba8955ccb" (UID: "3876d14f-7657-46c3-90dd-145ba8955ccb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.284688 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-config\") pod \"3876d14f-7657-46c3-90dd-145ba8955ccb\" (UID: \"3876d14f-7657-46c3-90dd-145ba8955ccb\") " Mar 08 00:45:53 crc kubenswrapper[4762]: W0308 00:45:53.288621 4762 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3876d14f-7657-46c3-90dd-145ba8955ccb/volumes/kubernetes.io~secret/config Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.288651 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-config" (OuterVolumeSpecName: "config") pod "3876d14f-7657-46c3-90dd-145ba8955ccb" (UID: "3876d14f-7657-46c3-90dd-145ba8955ccb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.305145 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.305404 4762 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.305415 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wr9v\" (UniqueName: \"kubernetes.io/projected/3876d14f-7657-46c3-90dd-145ba8955ccb-kube-api-access-4wr9v\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.309802 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="198d66d2-adcf-4028-9a59-9e396513f44d" path="/var/lib/kubelet/pods/198d66d2-adcf-4028-9a59-9e396513f44d/volumes" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.320875 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3876d14f-7657-46c3-90dd-145ba8955ccb" (UID: "3876d14f-7657-46c3-90dd-145ba8955ccb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.371545 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.382906 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3876d14f-7657-46c3-90dd-145ba8955ccb" (UID: "3876d14f-7657-46c3-90dd-145ba8955ccb"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.397249 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6d6c959c44-lwsnn" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.407384 4762 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.407428 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3876d14f-7657-46c3-90dd-145ba8955ccb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.498283 4762 generic.go:334] "Generic (PLEG): container finished" podID="3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" containerID="1f0ecddfe95f16a396f116e15350bd909a31cf5009d8363c7dd10f9b4f72243b" exitCode=0 Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.498374 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc6ddb745-wdmcn" event={"ID":"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827","Type":"ContainerDied","Data":"1f0ecddfe95f16a396f116e15350bd909a31cf5009d8363c7dd10f9b4f72243b"} Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.502405 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5575787c44-s8z4s_3876d14f-7657-46c3-90dd-145ba8955ccb/neutron-api/0.log" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.502643 4762 generic.go:334] "Generic (PLEG): container finished" podID="3876d14f-7657-46c3-90dd-145ba8955ccb" containerID="751653adc35122cf51f8e830b2bfeedeed4d1b11a6f9bc4b8554427075f4ffa0" exitCode=137 Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.502694 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5575787c44-s8z4s" 
event={"ID":"3876d14f-7657-46c3-90dd-145ba8955ccb","Type":"ContainerDied","Data":"751653adc35122cf51f8e830b2bfeedeed4d1b11a6f9bc4b8554427075f4ffa0"} Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.502733 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5575787c44-s8z4s" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.502792 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5575787c44-s8z4s" event={"ID":"3876d14f-7657-46c3-90dd-145ba8955ccb","Type":"ContainerDied","Data":"951136d34a90a041aa5c01b7f04db73f8c7dea7f0073dfbf58688e19240d1a64"} Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.502813 4762 scope.go:117] "RemoveContainer" containerID="59bd1eb6c0c53e592c24fa18ef3b1c14a1534e8912e049bc8ae01868f52e7d52" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.523840 4762 scope.go:117] "RemoveContainer" containerID="751653adc35122cf51f8e830b2bfeedeed4d1b11a6f9bc4b8554427075f4ffa0" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.541513 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5575787c44-s8z4s"] Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.552197 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5575787c44-s8z4s"] Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.552232 4762 scope.go:117] "RemoveContainer" containerID="59bd1eb6c0c53e592c24fa18ef3b1c14a1534e8912e049bc8ae01868f52e7d52" Mar 08 00:45:53 crc kubenswrapper[4762]: E0308 00:45:53.552640 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59bd1eb6c0c53e592c24fa18ef3b1c14a1534e8912e049bc8ae01868f52e7d52\": container with ID starting with 59bd1eb6c0c53e592c24fa18ef3b1c14a1534e8912e049bc8ae01868f52e7d52 not found: ID does not exist" containerID="59bd1eb6c0c53e592c24fa18ef3b1c14a1534e8912e049bc8ae01868f52e7d52" Mar 08 00:45:53 crc 
kubenswrapper[4762]: I0308 00:45:53.552675 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59bd1eb6c0c53e592c24fa18ef3b1c14a1534e8912e049bc8ae01868f52e7d52"} err="failed to get container status \"59bd1eb6c0c53e592c24fa18ef3b1c14a1534e8912e049bc8ae01868f52e7d52\": rpc error: code = NotFound desc = could not find container \"59bd1eb6c0c53e592c24fa18ef3b1c14a1534e8912e049bc8ae01868f52e7d52\": container with ID starting with 59bd1eb6c0c53e592c24fa18ef3b1c14a1534e8912e049bc8ae01868f52e7d52 not found: ID does not exist" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.552698 4762 scope.go:117] "RemoveContainer" containerID="751653adc35122cf51f8e830b2bfeedeed4d1b11a6f9bc4b8554427075f4ffa0" Mar 08 00:45:53 crc kubenswrapper[4762]: E0308 00:45:53.552935 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"751653adc35122cf51f8e830b2bfeedeed4d1b11a6f9bc4b8554427075f4ffa0\": container with ID starting with 751653adc35122cf51f8e830b2bfeedeed4d1b11a6f9bc4b8554427075f4ffa0 not found: ID does not exist" containerID="751653adc35122cf51f8e830b2bfeedeed4d1b11a6f9bc4b8554427075f4ffa0" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.552966 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"751653adc35122cf51f8e830b2bfeedeed4d1b11a6f9bc4b8554427075f4ffa0"} err="failed to get container status \"751653adc35122cf51f8e830b2bfeedeed4d1b11a6f9bc4b8554427075f4ffa0\": rpc error: code = NotFound desc = could not find container \"751653adc35122cf51f8e830b2bfeedeed4d1b11a6f9bc4b8554427075f4ffa0\": container with ID starting with 751653adc35122cf51f8e830b2bfeedeed4d1b11a6f9bc4b8554427075f4ffa0 not found: ID does not exist" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.734383 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5b78746fdd-smtch"] Mar 08 00:45:53 crc 
kubenswrapper[4762]: E0308 00:45:53.734842 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3876d14f-7657-46c3-90dd-145ba8955ccb" containerName="neutron-httpd" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.734858 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="3876d14f-7657-46c3-90dd-145ba8955ccb" containerName="neutron-httpd" Mar 08 00:45:53 crc kubenswrapper[4762]: E0308 00:45:53.734876 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="198d66d2-adcf-4028-9a59-9e396513f44d" containerName="barbican-api-log" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.734882 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="198d66d2-adcf-4028-9a59-9e396513f44d" containerName="barbican-api-log" Mar 08 00:45:53 crc kubenswrapper[4762]: E0308 00:45:53.734893 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3876d14f-7657-46c3-90dd-145ba8955ccb" containerName="neutron-api" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.734902 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="3876d14f-7657-46c3-90dd-145ba8955ccb" containerName="neutron-api" Mar 08 00:45:53 crc kubenswrapper[4762]: E0308 00:45:53.734925 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="198d66d2-adcf-4028-9a59-9e396513f44d" containerName="barbican-api" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.734933 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="198d66d2-adcf-4028-9a59-9e396513f44d" containerName="barbican-api" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.735096 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="198d66d2-adcf-4028-9a59-9e396513f44d" containerName="barbican-api-log" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.735110 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="3876d14f-7657-46c3-90dd-145ba8955ccb" containerName="neutron-api" Mar 08 00:45:53 crc kubenswrapper[4762]: 
I0308 00:45:53.735121 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="198d66d2-adcf-4028-9a59-9e396513f44d" containerName="barbican-api" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.735137 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="3876d14f-7657-46c3-90dd-145ba8955ccb" containerName="neutron-httpd" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.736136 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.745149 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b78746fdd-smtch"] Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.866664 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-b9c87cdf8-vw485" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.916123 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-logs\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.916199 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-internal-tls-certs\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.916237 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-scripts\") pod \"placement-5b78746fdd-smtch\" (UID: 
\"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.916356 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-public-tls-certs\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.916400 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-config-data\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.916454 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-combined-ca-bundle\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:53 crc kubenswrapper[4762]: I0308 00:45:53.916497 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4q8k\" (UniqueName: \"kubernetes.io/projected/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-kube-api-access-p4q8k\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.018388 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-combined-ca-bundle\") pod 
\"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.018492 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4q8k\" (UniqueName: \"kubernetes.io/projected/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-kube-api-access-p4q8k\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.018584 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-logs\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.018630 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-internal-tls-certs\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.018657 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-scripts\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.018779 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-public-tls-certs\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " 
pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.018835 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-config-data\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.019578 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-logs\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.023399 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-combined-ca-bundle\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.028535 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-internal-tls-certs\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.028989 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-public-tls-certs\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.030358 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-scripts\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.039084 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4q8k\" (UniqueName: \"kubernetes.io/projected/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-kube-api-access-p4q8k\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.041964 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/862ffdc5-bd1d-421a-9e37-0752fdf2c05f-config-data\") pod \"placement-5b78746fdd-smtch\" (UID: \"862ffdc5-bd1d-421a-9e37-0752fdf2c05f\") " pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.051457 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.730374 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5b78746fdd-smtch"] Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.884243 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:45:54 crc kubenswrapper[4762]: I0308 00:45:54.986130 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-lnczd"] Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.005334 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" podUID="845d8906-f82f-418a-88ca-8cd6087fafca" containerName="dnsmasq-dns" containerID="cri-o://52684a07b094fdb73ebb9e8a1456bb831460d157f7227c7b636c90b14e27f7cc" gracePeriod=10 Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.051478 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.111145 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.292202 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3876d14f-7657-46c3-90dd-145ba8955ccb" path="/var/lib/kubelet/pods/3876d14f-7657-46c3-90dd-145ba8955ccb/volumes" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.468038 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.573932 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-dns-svc\") pod \"845d8906-f82f-418a-88ca-8cd6087fafca\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.574211 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldlrt\" (UniqueName: \"kubernetes.io/projected/845d8906-f82f-418a-88ca-8cd6087fafca-kube-api-access-ldlrt\") pod \"845d8906-f82f-418a-88ca-8cd6087fafca\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.574233 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-config\") pod \"845d8906-f82f-418a-88ca-8cd6087fafca\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.574294 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-ovsdbserver-nb\") pod \"845d8906-f82f-418a-88ca-8cd6087fafca\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.574421 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-dns-swift-storage-0\") pod \"845d8906-f82f-418a-88ca-8cd6087fafca\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.574448 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-ovsdbserver-sb\") pod \"845d8906-f82f-418a-88ca-8cd6087fafca\" (UID: \"845d8906-f82f-418a-88ca-8cd6087fafca\") " Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.582680 4762 generic.go:334] "Generic (PLEG): container finished" podID="845d8906-f82f-418a-88ca-8cd6087fafca" containerID="52684a07b094fdb73ebb9e8a1456bb831460d157f7227c7b636c90b14e27f7cc" exitCode=0 Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.582789 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" event={"ID":"845d8906-f82f-418a-88ca-8cd6087fafca","Type":"ContainerDied","Data":"52684a07b094fdb73ebb9e8a1456bb831460d157f7227c7b636c90b14e27f7cc"} Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.582828 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" event={"ID":"845d8906-f82f-418a-88ca-8cd6087fafca","Type":"ContainerDied","Data":"fac738c3f3aac38a006e3c3b7ec656003c682572fa86046817c256085457aa07"} Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.582845 4762 scope.go:117] "RemoveContainer" containerID="52684a07b094fdb73ebb9e8a1456bb831460d157f7227c7b636c90b14e27f7cc" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.582978 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-lnczd" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.603062 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/845d8906-f82f-418a-88ca-8cd6087fafca-kube-api-access-ldlrt" (OuterVolumeSpecName: "kube-api-access-ldlrt") pod "845d8906-f82f-418a-88ca-8cd6087fafca" (UID: "845d8906-f82f-418a-88ca-8cd6087fafca"). InnerVolumeSpecName "kube-api-access-ldlrt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.607800 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" containerName="cinder-scheduler" containerID="cri-o://f8235f2de4c517589f97dfd0d2d3e2e95bb10bc573f45de5d2ed7d251f3b95f1" gracePeriod=30 Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.608172 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" containerName="probe" containerID="cri-o://aa2a3e080bf1cc7f9b98e7a45f33c101edc30e2a6959b084d1d0e55b8a9b8d13" gracePeriod=30 Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.608334 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b78746fdd-smtch" event={"ID":"862ffdc5-bd1d-421a-9e37-0752fdf2c05f","Type":"ContainerStarted","Data":"a6928b98633c87f8d438d6bc605c792e3d0e475117520b127f753d90d2b2caa2"} Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.608383 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b78746fdd-smtch" event={"ID":"862ffdc5-bd1d-421a-9e37-0752fdf2c05f","Type":"ContainerStarted","Data":"430ce60de61244dcf6c2c2757582576e23ed7bff570dab543e5b163938e0208c"} Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.608398 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5b78746fdd-smtch" event={"ID":"862ffdc5-bd1d-421a-9e37-0752fdf2c05f","Type":"ContainerStarted","Data":"c10c0c57d5dc429f41dcdbb747b319b39079ee6a86f56a91299ad1d9363ad442"} Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.608487 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.608927 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.636113 4762 scope.go:117] "RemoveContainer" containerID="45f3216608bde56fcc5b3e0ffadf963e3574c7e7f4420d36ae4606a23f6f6361" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.668371 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5b78746fdd-smtch" podStartSLOduration=2.668351949 podStartE2EDuration="2.668351949s" podCreationTimestamp="2026-03-08 00:45:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:45:55.654962541 +0000 UTC m=+1377.129106885" watchObservedRunningTime="2026-03-08 00:45:55.668351949 +0000 UTC m=+1377.142496293" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.674373 4762 scope.go:117] "RemoveContainer" containerID="52684a07b094fdb73ebb9e8a1456bb831460d157f7227c7b636c90b14e27f7cc" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.676362 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldlrt\" (UniqueName: \"kubernetes.io/projected/845d8906-f82f-418a-88ca-8cd6087fafca-kube-api-access-ldlrt\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:55 crc kubenswrapper[4762]: E0308 00:45:55.677215 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52684a07b094fdb73ebb9e8a1456bb831460d157f7227c7b636c90b14e27f7cc\": container with ID starting with 52684a07b094fdb73ebb9e8a1456bb831460d157f7227c7b636c90b14e27f7cc not found: ID does not exist" containerID="52684a07b094fdb73ebb9e8a1456bb831460d157f7227c7b636c90b14e27f7cc" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.677334 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52684a07b094fdb73ebb9e8a1456bb831460d157f7227c7b636c90b14e27f7cc"} err="failed to get container status 
\"52684a07b094fdb73ebb9e8a1456bb831460d157f7227c7b636c90b14e27f7cc\": rpc error: code = NotFound desc = could not find container \"52684a07b094fdb73ebb9e8a1456bb831460d157f7227c7b636c90b14e27f7cc\": container with ID starting with 52684a07b094fdb73ebb9e8a1456bb831460d157f7227c7b636c90b14e27f7cc not found: ID does not exist" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.677415 4762 scope.go:117] "RemoveContainer" containerID="45f3216608bde56fcc5b3e0ffadf963e3574c7e7f4420d36ae4606a23f6f6361" Mar 08 00:45:55 crc kubenswrapper[4762]: E0308 00:45:55.677863 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45f3216608bde56fcc5b3e0ffadf963e3574c7e7f4420d36ae4606a23f6f6361\": container with ID starting with 45f3216608bde56fcc5b3e0ffadf963e3574c7e7f4420d36ae4606a23f6f6361 not found: ID does not exist" containerID="45f3216608bde56fcc5b3e0ffadf963e3574c7e7f4420d36ae4606a23f6f6361" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.677893 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45f3216608bde56fcc5b3e0ffadf963e3574c7e7f4420d36ae4606a23f6f6361"} err="failed to get container status \"45f3216608bde56fcc5b3e0ffadf963e3574c7e7f4420d36ae4606a23f6f6361\": rpc error: code = NotFound desc = could not find container \"45f3216608bde56fcc5b3e0ffadf963e3574c7e7f4420d36ae4606a23f6f6361\": container with ID starting with 45f3216608bde56fcc5b3e0ffadf963e3574c7e7f4420d36ae4606a23f6f6361 not found: ID does not exist" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.682744 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "845d8906-f82f-418a-88ca-8cd6087fafca" (UID: "845d8906-f82f-418a-88ca-8cd6087fafca"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.684194 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "845d8906-f82f-418a-88ca-8cd6087fafca" (UID: "845d8906-f82f-418a-88ca-8cd6087fafca"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.697148 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "845d8906-f82f-418a-88ca-8cd6087fafca" (UID: "845d8906-f82f-418a-88ca-8cd6087fafca"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.700286 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-config" (OuterVolumeSpecName: "config") pod "845d8906-f82f-418a-88ca-8cd6087fafca" (UID: "845d8906-f82f-418a-88ca-8cd6087fafca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.705237 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "845d8906-f82f-418a-88ca-8cd6087fafca" (UID: "845d8906-f82f-418a-88ca-8cd6087fafca"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.778383 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.778429 4762 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.778447 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.778465 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.778484 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/845d8906-f82f-418a-88ca-8cd6087fafca-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.920837 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-lnczd"] Mar 08 00:45:55 crc kubenswrapper[4762]: I0308 00:45:55.928353 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-lnczd"] Mar 08 00:45:56 crc kubenswrapper[4762]: I0308 00:45:56.623522 4762 generic.go:334] "Generic (PLEG): container finished" podID="c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" containerID="aa2a3e080bf1cc7f9b98e7a45f33c101edc30e2a6959b084d1d0e55b8a9b8d13" exitCode=0 Mar 08 00:45:56 crc 
kubenswrapper[4762]: I0308 00:45:56.623718 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e","Type":"ContainerDied","Data":"aa2a3e080bf1cc7f9b98e7a45f33c101edc30e2a6959b084d1d0e55b8a9b8d13"} Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.282806 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="845d8906-f82f-418a-88ca-8cd6087fafca" path="/var/lib/kubelet/pods/845d8906-f82f-418a-88ca-8cd6087fafca/volumes" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.436583 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.610358 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-httpd-config\") pod \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.610411 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-combined-ca-bundle\") pod \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.610445 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-internal-tls-certs\") pod \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.610478 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-ovndb-tls-certs\") pod \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.610509 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-config\") pod \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.610531 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nch6q\" (UniqueName: \"kubernetes.io/projected/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-kube-api-access-nch6q\") pod \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.610594 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-public-tls-certs\") pod \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\" (UID: \"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827\") " Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.626954 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-kube-api-access-nch6q" (OuterVolumeSpecName: "kube-api-access-nch6q") pod "3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" (UID: "3ab4ef1a-343c-4aeb-8f34-d4a46ae25827"). InnerVolumeSpecName "kube-api-access-nch6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.634925 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" (UID: "3ab4ef1a-343c-4aeb-8f34-d4a46ae25827"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.647797 4762 generic.go:334] "Generic (PLEG): container finished" podID="3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" containerID="ff8363a67a1951b3052fbc19ff59c6e68173a5d969bcf4177fdd189d4bf4fd74" exitCode=0 Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.647851 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc6ddb745-wdmcn" event={"ID":"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827","Type":"ContainerDied","Data":"ff8363a67a1951b3052fbc19ff59c6e68173a5d969bcf4177fdd189d4bf4fd74"} Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.647879 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-cc6ddb745-wdmcn" event={"ID":"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827","Type":"ContainerDied","Data":"5f0d99b47935a5d55353ef94e87510c5796fd38fe6d0650502a7e2ee7661bbc8"} Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.647897 4762 scope.go:117] "RemoveContainer" containerID="1f0ecddfe95f16a396f116e15350bd909a31cf5009d8363c7dd10f9b4f72243b" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.648066 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-cc6ddb745-wdmcn" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.673185 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" (UID: "3ab4ef1a-343c-4aeb-8f34-d4a46ae25827"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.695973 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-config" (OuterVolumeSpecName: "config") pod "3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" (UID: "3ab4ef1a-343c-4aeb-8f34-d4a46ae25827"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.697933 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" (UID: "3ab4ef1a-343c-4aeb-8f34-d4a46ae25827"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.706894 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" (UID: "3ab4ef1a-343c-4aeb-8f34-d4a46ae25827"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.712470 4762 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.712499 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.712510 4762 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.712519 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.712527 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nch6q\" (UniqueName: \"kubernetes.io/projected/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-kube-api-access-nch6q\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.712536 4762 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.715051 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" (UID: 
"3ab4ef1a-343c-4aeb-8f34-d4a46ae25827"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.761225 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 08 00:45:57 crc kubenswrapper[4762]: E0308 00:45:57.761682 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" containerName="neutron-api" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.761711 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" containerName="neutron-api" Mar 08 00:45:57 crc kubenswrapper[4762]: E0308 00:45:57.761733 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845d8906-f82f-418a-88ca-8cd6087fafca" containerName="init" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.761741 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="845d8906-f82f-418a-88ca-8cd6087fafca" containerName="init" Mar 08 00:45:57 crc kubenswrapper[4762]: E0308 00:45:57.761800 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" containerName="neutron-httpd" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.761811 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" containerName="neutron-httpd" Mar 08 00:45:57 crc kubenswrapper[4762]: E0308 00:45:57.761832 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845d8906-f82f-418a-88ca-8cd6087fafca" containerName="dnsmasq-dns" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.761838 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="845d8906-f82f-418a-88ca-8cd6087fafca" containerName="dnsmasq-dns" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.762052 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" 
containerName="neutron-api" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.762082 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="845d8906-f82f-418a-88ca-8cd6087fafca" containerName="dnsmasq-dns" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.762095 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" containerName="neutron-httpd" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.762781 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.765677 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.765842 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.766346 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-bmtbj" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.801687 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.806322 4762 scope.go:117] "RemoveContainer" containerID="ff8363a67a1951b3052fbc19ff59c6e68173a5d969bcf4177fdd189d4bf4fd74" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.814369 4762 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.833704 4762 scope.go:117] "RemoveContainer" containerID="1f0ecddfe95f16a396f116e15350bd909a31cf5009d8363c7dd10f9b4f72243b" Mar 08 00:45:57 crc kubenswrapper[4762]: E0308 00:45:57.834171 4762 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f0ecddfe95f16a396f116e15350bd909a31cf5009d8363c7dd10f9b4f72243b\": container with ID starting with 1f0ecddfe95f16a396f116e15350bd909a31cf5009d8363c7dd10f9b4f72243b not found: ID does not exist" containerID="1f0ecddfe95f16a396f116e15350bd909a31cf5009d8363c7dd10f9b4f72243b" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.834201 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f0ecddfe95f16a396f116e15350bd909a31cf5009d8363c7dd10f9b4f72243b"} err="failed to get container status \"1f0ecddfe95f16a396f116e15350bd909a31cf5009d8363c7dd10f9b4f72243b\": rpc error: code = NotFound desc = could not find container \"1f0ecddfe95f16a396f116e15350bd909a31cf5009d8363c7dd10f9b4f72243b\": container with ID starting with 1f0ecddfe95f16a396f116e15350bd909a31cf5009d8363c7dd10f9b4f72243b not found: ID does not exist" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.834219 4762 scope.go:117] "RemoveContainer" containerID="ff8363a67a1951b3052fbc19ff59c6e68173a5d969bcf4177fdd189d4bf4fd74" Mar 08 00:45:57 crc kubenswrapper[4762]: E0308 00:45:57.835507 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff8363a67a1951b3052fbc19ff59c6e68173a5d969bcf4177fdd189d4bf4fd74\": container with ID starting with ff8363a67a1951b3052fbc19ff59c6e68173a5d969bcf4177fdd189d4bf4fd74 not found: ID does not exist" containerID="ff8363a67a1951b3052fbc19ff59c6e68173a5d969bcf4177fdd189d4bf4fd74" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.835531 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8363a67a1951b3052fbc19ff59c6e68173a5d969bcf4177fdd189d4bf4fd74"} err="failed to get container status \"ff8363a67a1951b3052fbc19ff59c6e68173a5d969bcf4177fdd189d4bf4fd74\": rpc error: code = NotFound desc = could not find container 
\"ff8363a67a1951b3052fbc19ff59c6e68173a5d969bcf4177fdd189d4bf4fd74\": container with ID starting with ff8363a67a1951b3052fbc19ff59c6e68173a5d969bcf4177fdd189d4bf4fd74 not found: ID does not exist" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.915889 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e2cf1284-8abe-43ee-9bc3-65a478753958-openstack-config\") pod \"openstackclient\" (UID: \"e2cf1284-8abe-43ee-9bc3-65a478753958\") " pod="openstack/openstackclient" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.916100 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2cf1284-8abe-43ee-9bc3-65a478753958-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e2cf1284-8abe-43ee-9bc3-65a478753958\") " pod="openstack/openstackclient" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.916190 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm7bs\" (UniqueName: \"kubernetes.io/projected/e2cf1284-8abe-43ee-9bc3-65a478753958-kube-api-access-lm7bs\") pod \"openstackclient\" (UID: \"e2cf1284-8abe-43ee-9bc3-65a478753958\") " pod="openstack/openstackclient" Mar 08 00:45:57 crc kubenswrapper[4762]: I0308 00:45:57.916241 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e2cf1284-8abe-43ee-9bc3-65a478753958-openstack-config-secret\") pod \"openstackclient\" (UID: \"e2cf1284-8abe-43ee-9bc3-65a478753958\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.022811 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-cc6ddb745-wdmcn"] Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.025112 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2cf1284-8abe-43ee-9bc3-65a478753958-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e2cf1284-8abe-43ee-9bc3-65a478753958\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.025274 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm7bs\" (UniqueName: \"kubernetes.io/projected/e2cf1284-8abe-43ee-9bc3-65a478753958-kube-api-access-lm7bs\") pod \"openstackclient\" (UID: \"e2cf1284-8abe-43ee-9bc3-65a478753958\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.025360 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e2cf1284-8abe-43ee-9bc3-65a478753958-openstack-config-secret\") pod \"openstackclient\" (UID: \"e2cf1284-8abe-43ee-9bc3-65a478753958\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.025463 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e2cf1284-8abe-43ee-9bc3-65a478753958-openstack-config\") pod \"openstackclient\" (UID: \"e2cf1284-8abe-43ee-9bc3-65a478753958\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.026590 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e2cf1284-8abe-43ee-9bc3-65a478753958-openstack-config\") pod \"openstackclient\" (UID: \"e2cf1284-8abe-43ee-9bc3-65a478753958\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.031057 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/e2cf1284-8abe-43ee-9bc3-65a478753958-openstack-config-secret\") pod \"openstackclient\" (UID: \"e2cf1284-8abe-43ee-9bc3-65a478753958\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.032357 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2cf1284-8abe-43ee-9bc3-65a478753958-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e2cf1284-8abe-43ee-9bc3-65a478753958\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.063302 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-cc6ddb745-wdmcn"] Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.064209 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm7bs\" (UniqueName: \"kubernetes.io/projected/e2cf1284-8abe-43ee-9bc3-65a478753958-kube-api-access-lm7bs\") pod \"openstackclient\" (UID: \"e2cf1284-8abe-43ee-9bc3-65a478753958\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.087857 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.114873 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.133350 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.148692 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.150665 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.171750 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.331915 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/59501363-c16d-4d5b-97b4-42322e95ab83-openstack-config\") pod \"openstackclient\" (UID: \"59501363-c16d-4d5b-97b4-42322e95ab83\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.331971 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/59501363-c16d-4d5b-97b4-42322e95ab83-openstack-config-secret\") pod \"openstackclient\" (UID: \"59501363-c16d-4d5b-97b4-42322e95ab83\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.332036 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv2k5\" (UniqueName: \"kubernetes.io/projected/59501363-c16d-4d5b-97b4-42322e95ab83-kube-api-access-dv2k5\") pod \"openstackclient\" (UID: \"59501363-c16d-4d5b-97b4-42322e95ab83\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.332093 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59501363-c16d-4d5b-97b4-42322e95ab83-combined-ca-bundle\") pod \"openstackclient\" (UID: \"59501363-c16d-4d5b-97b4-42322e95ab83\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: E0308 00:45:58.359116 4762 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 08 00:45:58 crc kubenswrapper[4762]: rpc error: code = Unknown desc = failed to create pod 
network sandbox k8s_openstackclient_openstack_e2cf1284-8abe-43ee-9bc3-65a478753958_0(c97a77d9475c63b9d34fdfb515c8f737f994a8c0ee38c8b479eaa2d5bc5ba3af): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c97a77d9475c63b9d34fdfb515c8f737f994a8c0ee38c8b479eaa2d5bc5ba3af" Netns:"/var/run/netns/8cd0371b-0996-443d-9511-137131cd82e5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=c97a77d9475c63b9d34fdfb515c8f737f994a8c0ee38c8b479eaa2d5bc5ba3af;K8S_POD_UID=e2cf1284-8abe-43ee-9bc3-65a478753958" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/e2cf1284-8abe-43ee-9bc3-65a478753958]: expected pod UID "e2cf1284-8abe-43ee-9bc3-65a478753958" but got "59501363-c16d-4d5b-97b4-42322e95ab83" from Kube API Mar 08 00:45:58 crc kubenswrapper[4762]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 08 00:45:58 crc kubenswrapper[4762]: > Mar 08 00:45:58 crc kubenswrapper[4762]: E0308 00:45:58.359182 4762 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 08 00:45:58 crc kubenswrapper[4762]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_e2cf1284-8abe-43ee-9bc3-65a478753958_0(c97a77d9475c63b9d34fdfb515c8f737f994a8c0ee38c8b479eaa2d5bc5ba3af): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd 
(shim): CNI request failed with status 400: 'ContainerID:"c97a77d9475c63b9d34fdfb515c8f737f994a8c0ee38c8b479eaa2d5bc5ba3af" Netns:"/var/run/netns/8cd0371b-0996-443d-9511-137131cd82e5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=c97a77d9475c63b9d34fdfb515c8f737f994a8c0ee38c8b479eaa2d5bc5ba3af;K8S_POD_UID=e2cf1284-8abe-43ee-9bc3-65a478753958" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/e2cf1284-8abe-43ee-9bc3-65a478753958]: expected pod UID "e2cf1284-8abe-43ee-9bc3-65a478753958" but got "59501363-c16d-4d5b-97b4-42322e95ab83" from Kube API Mar 08 00:45:58 crc kubenswrapper[4762]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 08 00:45:58 crc kubenswrapper[4762]: > pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.433910 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv2k5\" (UniqueName: \"kubernetes.io/projected/59501363-c16d-4d5b-97b4-42322e95ab83-kube-api-access-dv2k5\") pod \"openstackclient\" (UID: \"59501363-c16d-4d5b-97b4-42322e95ab83\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.434005 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59501363-c16d-4d5b-97b4-42322e95ab83-combined-ca-bundle\") pod \"openstackclient\" (UID: \"59501363-c16d-4d5b-97b4-42322e95ab83\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.434084 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/59501363-c16d-4d5b-97b4-42322e95ab83-openstack-config\") pod \"openstackclient\" (UID: \"59501363-c16d-4d5b-97b4-42322e95ab83\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.434111 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/59501363-c16d-4d5b-97b4-42322e95ab83-openstack-config-secret\") pod \"openstackclient\" (UID: \"59501363-c16d-4d5b-97b4-42322e95ab83\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.435050 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/59501363-c16d-4d5b-97b4-42322e95ab83-openstack-config\") pod \"openstackclient\" (UID: \"59501363-c16d-4d5b-97b4-42322e95ab83\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.439090 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/59501363-c16d-4d5b-97b4-42322e95ab83-openstack-config-secret\") pod \"openstackclient\" (UID: \"59501363-c16d-4d5b-97b4-42322e95ab83\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.439310 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59501363-c16d-4d5b-97b4-42322e95ab83-combined-ca-bundle\") pod \"openstackclient\" (UID: \"59501363-c16d-4d5b-97b4-42322e95ab83\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.456377 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv2k5\" (UniqueName: 
\"kubernetes.io/projected/59501363-c16d-4d5b-97b4-42322e95ab83-kube-api-access-dv2k5\") pod \"openstackclient\" (UID: \"59501363-c16d-4d5b-97b4-42322e95ab83\") " pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.594401 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.662610 4762 generic.go:334] "Generic (PLEG): container finished" podID="c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" containerID="f8235f2de4c517589f97dfd0d2d3e2e95bb10bc573f45de5d2ed7d251f3b95f1" exitCode=0 Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.662699 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e","Type":"ContainerDied","Data":"f8235f2de4c517589f97dfd0d2d3e2e95bb10bc573f45de5d2ed7d251f3b95f1"} Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.669466 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.685559 4762 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="e2cf1284-8abe-43ee-9bc3-65a478753958" podUID="59501363-c16d-4d5b-97b4-42322e95ab83" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.690542 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.702045 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.844381 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-scripts\") pod \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.844448 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2ns6\" (UniqueName: \"kubernetes.io/projected/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-kube-api-access-h2ns6\") pod \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.844486 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e2cf1284-8abe-43ee-9bc3-65a478753958-openstack-config-secret\") pod \"e2cf1284-8abe-43ee-9bc3-65a478753958\" (UID: \"e2cf1284-8abe-43ee-9bc3-65a478753958\") " Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.844526 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-config-data-custom\") pod \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.844553 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-config-data\") pod \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.844641 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e2cf1284-8abe-43ee-9bc3-65a478753958-openstack-config\") pod \"e2cf1284-8abe-43ee-9bc3-65a478753958\" (UID: \"e2cf1284-8abe-43ee-9bc3-65a478753958\") " Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.844672 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm7bs\" (UniqueName: \"kubernetes.io/projected/e2cf1284-8abe-43ee-9bc3-65a478753958-kube-api-access-lm7bs\") pod \"e2cf1284-8abe-43ee-9bc3-65a478753958\" (UID: \"e2cf1284-8abe-43ee-9bc3-65a478753958\") " Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.844697 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-etc-machine-id\") pod \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.844716 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-combined-ca-bundle\") pod \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\" (UID: \"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e\") " Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.844734 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2cf1284-8abe-43ee-9bc3-65a478753958-combined-ca-bundle\") pod \"e2cf1284-8abe-43ee-9bc3-65a478753958\" (UID: \"e2cf1284-8abe-43ee-9bc3-65a478753958\") " Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.845021 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" (UID: "c639ef5f-744f-418b-bfa5-1f4d3ec8b30e"). 
InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.845362 4762 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.845516 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2cf1284-8abe-43ee-9bc3-65a478753958-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "e2cf1284-8abe-43ee-9bc3-65a478753958" (UID: "e2cf1284-8abe-43ee-9bc3-65a478753958"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.849410 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2cf1284-8abe-43ee-9bc3-65a478753958-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "e2cf1284-8abe-43ee-9bc3-65a478753958" (UID: "e2cf1284-8abe-43ee-9bc3-65a478753958"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.849942 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-scripts" (OuterVolumeSpecName: "scripts") pod "c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" (UID: "c639ef5f-744f-418b-bfa5-1f4d3ec8b30e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.852633 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" (UID: "c639ef5f-744f-418b-bfa5-1f4d3ec8b30e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.852850 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2cf1284-8abe-43ee-9bc3-65a478753958-kube-api-access-lm7bs" (OuterVolumeSpecName: "kube-api-access-lm7bs") pod "e2cf1284-8abe-43ee-9bc3-65a478753958" (UID: "e2cf1284-8abe-43ee-9bc3-65a478753958"). InnerVolumeSpecName "kube-api-access-lm7bs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.854289 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2cf1284-8abe-43ee-9bc3-65a478753958-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2cf1284-8abe-43ee-9bc3-65a478753958" (UID: "e2cf1284-8abe-43ee-9bc3-65a478753958"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.855248 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-kube-api-access-h2ns6" (OuterVolumeSpecName: "kube-api-access-h2ns6") pod "c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" (UID: "c639ef5f-744f-418b-bfa5-1f4d3ec8b30e"). InnerVolumeSpecName "kube-api-access-h2ns6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.917917 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" (UID: "c639ef5f-744f-418b-bfa5-1f4d3ec8b30e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.947127 4762 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e2cf1284-8abe-43ee-9bc3-65a478753958-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.947167 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.947180 4762 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e2cf1284-8abe-43ee-9bc3-65a478753958-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.947192 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm7bs\" (UniqueName: \"kubernetes.io/projected/e2cf1284-8abe-43ee-9bc3-65a478753958-kube-api-access-lm7bs\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.947204 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2cf1284-8abe-43ee-9bc3-65a478753958-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.947216 4762 reconciler_common.go:293] "Volume detached 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.947227 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.947239 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2ns6\" (UniqueName: \"kubernetes.io/projected/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-kube-api-access-h2ns6\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:58 crc kubenswrapper[4762]: I0308 00:45:58.966168 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-config-data" (OuterVolumeSpecName: "config-data") pod "c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" (UID: "c639ef5f-744f-418b-bfa5-1f4d3ec8b30e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.039959 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.048405 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.276286 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab4ef1a-343c-4aeb-8f34-d4a46ae25827" path="/var/lib/kubelet/pods/3ab4ef1a-343c-4aeb-8f34-d4a46ae25827/volumes" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.276986 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2cf1284-8abe-43ee-9bc3-65a478753958" path="/var/lib/kubelet/pods/e2cf1284-8abe-43ee-9bc3-65a478753958/volumes" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.685156 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"59501363-c16d-4d5b-97b4-42322e95ab83","Type":"ContainerStarted","Data":"4ab2f810c11e0a5e5c6c5eb348b6f24e51c510fcb0faadd59036fc6c6f65e5a8"} Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.688641 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.688707 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.688641 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"c639ef5f-744f-418b-bfa5-1f4d3ec8b30e","Type":"ContainerDied","Data":"9374969cd6f9daaab61c357a49d98bb5ef7144890f0f45aac09c55531d54cc54"} Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.689883 4762 scope.go:117] "RemoveContainer" containerID="aa2a3e080bf1cc7f9b98e7a45f33c101edc30e2a6959b084d1d0e55b8a9b8d13" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.700464 4762 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="e2cf1284-8abe-43ee-9bc3-65a478753958" podUID="59501363-c16d-4d5b-97b4-42322e95ab83" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.734827 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.740647 4762 scope.go:117] "RemoveContainer" containerID="f8235f2de4c517589f97dfd0d2d3e2e95bb10bc573f45de5d2ed7d251f3b95f1" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.750179 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.757916 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 08 00:45:59 crc kubenswrapper[4762]: E0308 00:45:59.758410 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" containerName="cinder-scheduler" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.758430 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" containerName="cinder-scheduler" Mar 08 00:45:59 crc kubenswrapper[4762]: E0308 00:45:59.758463 4762 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" containerName="probe" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.758471 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" containerName="probe" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.758667 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" containerName="probe" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.758707 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" containerName="cinder-scheduler" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.759897 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.762340 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.768139 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.860933 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhg6l\" (UniqueName: \"kubernetes.io/projected/af0c65d2-782a-49ee-a867-296757df295b-kube-api-access-qhg6l\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.861004 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af0c65d2-782a-49ee-a867-296757df295b-config-data\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.861052 4762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af0c65d2-782a-49ee-a867-296757df295b-scripts\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.861079 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af0c65d2-782a-49ee-a867-296757df295b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.861093 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af0c65d2-782a-49ee-a867-296757df295b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.861108 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af0c65d2-782a-49ee-a867-296757df295b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.962770 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af0c65d2-782a-49ee-a867-296757df295b-scripts\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.962831 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/af0c65d2-782a-49ee-a867-296757df295b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.962856 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af0c65d2-782a-49ee-a867-296757df295b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.962877 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af0c65d2-782a-49ee-a867-296757df295b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.963027 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhg6l\" (UniqueName: \"kubernetes.io/projected/af0c65d2-782a-49ee-a867-296757df295b-kube-api-access-qhg6l\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.963067 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af0c65d2-782a-49ee-a867-296757df295b-config-data\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.967536 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/af0c65d2-782a-49ee-a867-296757df295b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.968381 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/af0c65d2-782a-49ee-a867-296757df295b-scripts\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.970881 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af0c65d2-782a-49ee-a867-296757df295b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.971746 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af0c65d2-782a-49ee-a867-296757df295b-config-data\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.972377 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/af0c65d2-782a-49ee-a867-296757df295b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:45:59 crc kubenswrapper[4762]: I0308 00:45:59.989400 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhg6l\" (UniqueName: \"kubernetes.io/projected/af0c65d2-782a-49ee-a867-296757df295b-kube-api-access-qhg6l\") pod \"cinder-scheduler-0\" (UID: \"af0c65d2-782a-49ee-a867-296757df295b\") " pod="openstack/cinder-scheduler-0" Mar 08 00:46:00 crc kubenswrapper[4762]: I0308 00:46:00.083420 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 08 00:46:00 crc kubenswrapper[4762]: I0308 00:46:00.149812 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548846-9shbs"] Mar 08 00:46:00 crc kubenswrapper[4762]: I0308 00:46:00.151122 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548846-9shbs" Mar 08 00:46:00 crc kubenswrapper[4762]: I0308 00:46:00.164833 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548846-9shbs"] Mar 08 00:46:00 crc kubenswrapper[4762]: I0308 00:46:00.169526 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 00:46:00 crc kubenswrapper[4762]: I0308 00:46:00.169776 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 00:46:00 crc kubenswrapper[4762]: I0308 00:46:00.173284 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 00:46:00 crc kubenswrapper[4762]: I0308 00:46:00.288933 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lnsb\" (UniqueName: \"kubernetes.io/projected/6708e2b7-0087-40fd-947c-b5c7adb5dcd9-kube-api-access-9lnsb\") pod \"auto-csr-approver-29548846-9shbs\" (UID: \"6708e2b7-0087-40fd-947c-b5c7adb5dcd9\") " pod="openshift-infra/auto-csr-approver-29548846-9shbs" Mar 08 00:46:00 crc kubenswrapper[4762]: I0308 00:46:00.391788 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lnsb\" (UniqueName: \"kubernetes.io/projected/6708e2b7-0087-40fd-947c-b5c7adb5dcd9-kube-api-access-9lnsb\") pod \"auto-csr-approver-29548846-9shbs\" (UID: \"6708e2b7-0087-40fd-947c-b5c7adb5dcd9\") " pod="openshift-infra/auto-csr-approver-29548846-9shbs" Mar 08 00:46:00 
crc kubenswrapper[4762]: I0308 00:46:00.425935 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lnsb\" (UniqueName: \"kubernetes.io/projected/6708e2b7-0087-40fd-947c-b5c7adb5dcd9-kube-api-access-9lnsb\") pod \"auto-csr-approver-29548846-9shbs\" (UID: \"6708e2b7-0087-40fd-947c-b5c7adb5dcd9\") " pod="openshift-infra/auto-csr-approver-29548846-9shbs" Mar 08 00:46:00 crc kubenswrapper[4762]: I0308 00:46:00.533769 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548846-9shbs" Mar 08 00:46:00 crc kubenswrapper[4762]: I0308 00:46:00.653850 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 08 00:46:00 crc kubenswrapper[4762]: I0308 00:46:00.733967 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"af0c65d2-782a-49ee-a867-296757df295b","Type":"ContainerStarted","Data":"663ae247a483656d58cc835067486e613f6d750aefc3be30b799e232718a2e9a"} Mar 08 00:46:01 crc kubenswrapper[4762]: I0308 00:46:01.125566 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548846-9shbs"] Mar 08 00:46:01 crc kubenswrapper[4762]: I0308 00:46:01.282061 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c639ef5f-744f-418b-bfa5-1f4d3ec8b30e" path="/var/lib/kubelet/pods/c639ef5f-744f-418b-bfa5-1f4d3ec8b30e/volumes" Mar 08 00:46:01 crc kubenswrapper[4762]: I0308 00:46:01.746536 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"af0c65d2-782a-49ee-a867-296757df295b","Type":"ContainerStarted","Data":"053476c361f5549217982b407d66f0468dd9c13452fc9794567dafd203edb59e"} Mar 08 00:46:01 crc kubenswrapper[4762]: I0308 00:46:01.753286 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548846-9shbs" 
event={"ID":"6708e2b7-0087-40fd-947c-b5c7adb5dcd9","Type":"ContainerStarted","Data":"f09074069f4a7b8c05ffcaaf05aa7f858aea6e562b1f6e0a000508e650735d18"} Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.151662 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.388208 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6d78d68b57-45zwj"] Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.392637 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.400101 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.400311 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.400431 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.401726 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d78d68b57-45zwj"] Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.543300 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-internal-tls-certs\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.543359 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-log-httpd\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.543486 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-run-httpd\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.543539 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-combined-ca-bundle\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.543624 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-etc-swift\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.543684 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-config-data\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.543736 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vnc7l\" (UniqueName: \"kubernetes.io/projected/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-kube-api-access-vnc7l\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.543777 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-public-tls-certs\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.645822 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-combined-ca-bundle\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.645890 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-etc-swift\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.645909 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-config-data\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.645930 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnc7l\" 
(UniqueName: \"kubernetes.io/projected/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-kube-api-access-vnc7l\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.645957 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-public-tls-certs\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.645989 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-internal-tls-certs\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.646022 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-log-httpd\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.646068 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-run-httpd\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.646449 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-run-httpd\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.657547 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-combined-ca-bundle\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.657986 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-log-httpd\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.663526 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-config-data\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.663811 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-etc-swift\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.664263 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-public-tls-certs\") pod \"swift-proxy-6d78d68b57-45zwj\" 
(UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.666630 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-internal-tls-certs\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.670405 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnc7l\" (UniqueName: \"kubernetes.io/projected/fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa-kube-api-access-vnc7l\") pod \"swift-proxy-6d78d68b57-45zwj\" (UID: \"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa\") " pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.729959 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.773900 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"af0c65d2-782a-49ee-a867-296757df295b","Type":"ContainerStarted","Data":"2cf9b17ecc0c3ad772db4989b5c2dc09d372f14b8daff937d7b0af4e596bcd06"} Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.776482 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548846-9shbs" event={"ID":"6708e2b7-0087-40fd-947c-b5c7adb5dcd9","Type":"ContainerStarted","Data":"1818ba3c6001fd50ef6679846e58d75a21828782350520c3d35f275564612aad"} Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.792686 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.792666418 podStartE2EDuration="3.792666418s" podCreationTimestamp="2026-03-08 00:45:59 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:46:02.792214755 +0000 UTC m=+1384.266359099" watchObservedRunningTime="2026-03-08 00:46:02.792666418 +0000 UTC m=+1384.266810762" Mar 08 00:46:02 crc kubenswrapper[4762]: I0308 00:46:02.814850 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548846-9shbs" podStartSLOduration=1.890983409 podStartE2EDuration="2.814831503s" podCreationTimestamp="2026-03-08 00:46:00 +0000 UTC" firstStartedPulling="2026-03-08 00:46:01.144113299 +0000 UTC m=+1382.618257643" lastFinishedPulling="2026-03-08 00:46:02.067961393 +0000 UTC m=+1383.542105737" observedRunningTime="2026-03-08 00:46:02.805208251 +0000 UTC m=+1384.279352595" watchObservedRunningTime="2026-03-08 00:46:02.814831503 +0000 UTC m=+1384.288975847" Mar 08 00:46:03 crc kubenswrapper[4762]: I0308 00:46:03.330985 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6d78d68b57-45zwj"] Mar 08 00:46:03 crc kubenswrapper[4762]: I0308 00:46:03.790995 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d78d68b57-45zwj" event={"ID":"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa","Type":"ContainerStarted","Data":"93579a3edba3e15f8a798cdd64f46a13c8635935f70d954c4c34871efff11cee"} Mar 08 00:46:03 crc kubenswrapper[4762]: I0308 00:46:03.791477 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d78d68b57-45zwj" event={"ID":"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa","Type":"ContainerStarted","Data":"27139c39f7463fa4445c308c3bb08f9547257ae6ad66f2108478416da0a4bd91"} Mar 08 00:46:03 crc kubenswrapper[4762]: I0308 00:46:03.800800 4762 generic.go:334] "Generic (PLEG): container finished" podID="6708e2b7-0087-40fd-947c-b5c7adb5dcd9" containerID="1818ba3c6001fd50ef6679846e58d75a21828782350520c3d35f275564612aad" exitCode=0 Mar 08 00:46:03 crc kubenswrapper[4762]: 
I0308 00:46:03.800862 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548846-9shbs" event={"ID":"6708e2b7-0087-40fd-947c-b5c7adb5dcd9","Type":"ContainerDied","Data":"1818ba3c6001fd50ef6679846e58d75a21828782350520c3d35f275564612aad"} Mar 08 00:46:03 crc kubenswrapper[4762]: I0308 00:46:03.917942 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gktkm"] Mar 08 00:46:03 crc kubenswrapper[4762]: I0308 00:46:03.922132 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gktkm" Mar 08 00:46:03 crc kubenswrapper[4762]: I0308 00:46:03.927613 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gktkm"] Mar 08 00:46:04 crc kubenswrapper[4762]: I0308 00:46:04.077227 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-utilities\") pod \"redhat-operators-gktkm\" (UID: \"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5\") " pod="openshift-marketplace/redhat-operators-gktkm" Mar 08 00:46:04 crc kubenswrapper[4762]: I0308 00:46:04.077277 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-catalog-content\") pod \"redhat-operators-gktkm\" (UID: \"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5\") " pod="openshift-marketplace/redhat-operators-gktkm" Mar 08 00:46:04 crc kubenswrapper[4762]: I0308 00:46:04.077708 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gljhw\" (UniqueName: \"kubernetes.io/projected/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-kube-api-access-gljhw\") pod \"redhat-operators-gktkm\" (UID: \"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5\") " 
pod="openshift-marketplace/redhat-operators-gktkm" Mar 08 00:46:04 crc kubenswrapper[4762]: I0308 00:46:04.178932 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gljhw\" (UniqueName: \"kubernetes.io/projected/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-kube-api-access-gljhw\") pod \"redhat-operators-gktkm\" (UID: \"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5\") " pod="openshift-marketplace/redhat-operators-gktkm" Mar 08 00:46:04 crc kubenswrapper[4762]: I0308 00:46:04.178975 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-utilities\") pod \"redhat-operators-gktkm\" (UID: \"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5\") " pod="openshift-marketplace/redhat-operators-gktkm" Mar 08 00:46:04 crc kubenswrapper[4762]: I0308 00:46:04.178998 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-catalog-content\") pod \"redhat-operators-gktkm\" (UID: \"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5\") " pod="openshift-marketplace/redhat-operators-gktkm" Mar 08 00:46:04 crc kubenswrapper[4762]: I0308 00:46:04.179495 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-catalog-content\") pod \"redhat-operators-gktkm\" (UID: \"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5\") " pod="openshift-marketplace/redhat-operators-gktkm" Mar 08 00:46:04 crc kubenswrapper[4762]: I0308 00:46:04.179949 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-utilities\") pod \"redhat-operators-gktkm\" (UID: \"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5\") " pod="openshift-marketplace/redhat-operators-gktkm" Mar 08 00:46:04 crc 
kubenswrapper[4762]: I0308 00:46:04.203620 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gljhw\" (UniqueName: \"kubernetes.io/projected/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-kube-api-access-gljhw\") pod \"redhat-operators-gktkm\" (UID: \"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5\") " pod="openshift-marketplace/redhat-operators-gktkm" Mar 08 00:46:04 crc kubenswrapper[4762]: I0308 00:46:04.246140 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gktkm" Mar 08 00:46:04 crc kubenswrapper[4762]: I0308 00:46:04.732781 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gktkm"] Mar 08 00:46:04 crc kubenswrapper[4762]: I0308 00:46:04.868525 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6d78d68b57-45zwj" event={"ID":"fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa","Type":"ContainerStarted","Data":"c5ae4448605f978bf1d6ad98a61f0f0fe5f6727e1b9b0881b2854138628b210d"} Mar 08 00:46:04 crc kubenswrapper[4762]: I0308 00:46:04.869654 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:04 crc kubenswrapper[4762]: I0308 00:46:04.869684 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:04 crc kubenswrapper[4762]: I0308 00:46:04.871501 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gktkm" event={"ID":"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5","Type":"ContainerStarted","Data":"ec3376a6995b02bc9ef85e372f9fb869faba052e49c8dd264ce08b7ccee7a700"} Mar 08 00:46:04 crc kubenswrapper[4762]: I0308 00:46:04.907175 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6d78d68b57-45zwj" podStartSLOduration=2.907147486 podStartE2EDuration="2.907147486s" 
podCreationTimestamp="2026-03-08 00:46:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:46:04.900809913 +0000 UTC m=+1386.374954257" watchObservedRunningTime="2026-03-08 00:46:04.907147486 +0000 UTC m=+1386.381291830" Mar 08 00:46:05 crc kubenswrapper[4762]: I0308 00:46:05.087270 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 08 00:46:05 crc kubenswrapper[4762]: I0308 00:46:05.373496 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548846-9shbs" Mar 08 00:46:05 crc kubenswrapper[4762]: I0308 00:46:05.502824 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lnsb\" (UniqueName: \"kubernetes.io/projected/6708e2b7-0087-40fd-947c-b5c7adb5dcd9-kube-api-access-9lnsb\") pod \"6708e2b7-0087-40fd-947c-b5c7adb5dcd9\" (UID: \"6708e2b7-0087-40fd-947c-b5c7adb5dcd9\") " Mar 08 00:46:05 crc kubenswrapper[4762]: I0308 00:46:05.509812 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6708e2b7-0087-40fd-947c-b5c7adb5dcd9-kube-api-access-9lnsb" (OuterVolumeSpecName: "kube-api-access-9lnsb") pod "6708e2b7-0087-40fd-947c-b5c7adb5dcd9" (UID: "6708e2b7-0087-40fd-947c-b5c7adb5dcd9"). InnerVolumeSpecName "kube-api-access-9lnsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:46:05 crc kubenswrapper[4762]: I0308 00:46:05.605918 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lnsb\" (UniqueName: \"kubernetes.io/projected/6708e2b7-0087-40fd-947c-b5c7adb5dcd9-kube-api-access-9lnsb\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:05 crc kubenswrapper[4762]: I0308 00:46:05.879749 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548840-hjssb"] Mar 08 00:46:05 crc kubenswrapper[4762]: I0308 00:46:05.909123 4762 generic.go:334] "Generic (PLEG): container finished" podID="cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" containerID="7e0a5928d27937eb1ed7b47c685528e5bf7ef4b0ffd9cb8473b82d761ad4dd82" exitCode=0 Mar 08 00:46:05 crc kubenswrapper[4762]: I0308 00:46:05.909222 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gktkm" event={"ID":"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5","Type":"ContainerDied","Data":"7e0a5928d27937eb1ed7b47c685528e5bf7ef4b0ffd9cb8473b82d761ad4dd82"} Mar 08 00:46:05 crc kubenswrapper[4762]: I0308 00:46:05.917100 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548846-9shbs" Mar 08 00:46:05 crc kubenswrapper[4762]: I0308 00:46:05.917625 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548846-9shbs" event={"ID":"6708e2b7-0087-40fd-947c-b5c7adb5dcd9","Type":"ContainerDied","Data":"f09074069f4a7b8c05ffcaaf05aa7f858aea6e562b1f6e0a000508e650735d18"} Mar 08 00:46:05 crc kubenswrapper[4762]: I0308 00:46:05.917682 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f09074069f4a7b8c05ffcaaf05aa7f858aea6e562b1f6e0a000508e650735d18" Mar 08 00:46:05 crc kubenswrapper[4762]: I0308 00:46:05.925009 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548840-hjssb"] Mar 08 00:46:06 crc kubenswrapper[4762]: I0308 00:46:06.929822 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gktkm" event={"ID":"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5","Type":"ContainerStarted","Data":"a0733e093453af09e168a34e892daa07ac2b89fa65c4d15e5ae1c23d6c020ac7"} Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.065310 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5d49b888c4-hqhhl"] Mar 08 00:46:07 crc kubenswrapper[4762]: E0308 00:46:07.065817 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6708e2b7-0087-40fd-947c-b5c7adb5dcd9" containerName="oc" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.065842 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="6708e2b7-0087-40fd-947c-b5c7adb5dcd9" containerName="oc" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.066133 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="6708e2b7-0087-40fd-947c-b5c7adb5dcd9" containerName="oc" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.067085 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.069772 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.069846 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.069958 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-zt4lp" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.088146 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5d49b888c4-hqhhl"] Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.133044 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-bccc47696-bz777"] Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.134247 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.149940 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.164774 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-bccc47696-bz777"] Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.191754 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-9ldwb"] Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.198500 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.211952 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-9ldwb"] Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.240582 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-config-data\") pod \"heat-engine-5d49b888c4-hqhhl\" (UID: \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\") " pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.240621 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h928l\" (UniqueName: \"kubernetes.io/projected/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-kube-api-access-h928l\") pod \"heat-engine-5d49b888c4-hqhhl\" (UID: \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\") " pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.240641 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-config-data-custom\") pod \"heat-engine-5d49b888c4-hqhhl\" (UID: \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\") " pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.240707 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-config-data\") pod \"heat-cfnapi-bccc47696-bz777\" (UID: \"e1f4db23-d669-492d-94a3-1d6538f754e8\") " pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.240809 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-config-data-custom\") pod \"heat-cfnapi-bccc47696-bz777\" (UID: \"e1f4db23-d669-492d-94a3-1d6538f754e8\") " pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.240832 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-combined-ca-bundle\") pod \"heat-cfnapi-bccc47696-bz777\" (UID: \"e1f4db23-d669-492d-94a3-1d6538f754e8\") " pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.240852 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llm44\" (UniqueName: \"kubernetes.io/projected/e1f4db23-d669-492d-94a3-1d6538f754e8-kube-api-access-llm44\") pod \"heat-cfnapi-bccc47696-bz777\" (UID: \"e1f4db23-d669-492d-94a3-1d6538f754e8\") " pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.240887 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-combined-ca-bundle\") pod \"heat-engine-5d49b888c4-hqhhl\" (UID: \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\") " pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.299377 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe7a9f3e-b771-4b5f-93ed-3092375d617e" path="/var/lib/kubelet/pods/fe7a9f3e-b771-4b5f-93ed-3092375d617e/volumes" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.335100 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-576bfcb8cc-cr5zz"] Mar 08 00:46:07 crc 
kubenswrapper[4762]: I0308 00:46:07.336467 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.339617 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.342342 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-combined-ca-bundle\") pod \"heat-engine-5d49b888c4-hqhhl\" (UID: \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\") " pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.342404 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.342442 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.342527 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-config-data\") pod \"heat-engine-5d49b888c4-hqhhl\" (UID: \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\") " pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.342556 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h928l\" (UniqueName: \"kubernetes.io/projected/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-kube-api-access-h928l\") pod \"heat-engine-5d49b888c4-hqhhl\" (UID: \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\") " pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.342613 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-config-data-custom\") pod \"heat-engine-5d49b888c4-hqhhl\" (UID: \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\") " pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.342714 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.342786 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2rg2\" (UniqueName: \"kubernetes.io/projected/9f0b97b7-6c28-4a8b-99d7-242dde839d36-kube-api-access-d2rg2\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.342814 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-config\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 
00:46:07.343559 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-config-data\") pod \"heat-cfnapi-bccc47696-bz777\" (UID: \"e1f4db23-d669-492d-94a3-1d6538f754e8\") " pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.344413 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.344670 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-config-data-custom\") pod \"heat-cfnapi-bccc47696-bz777\" (UID: \"e1f4db23-d669-492d-94a3-1d6538f754e8\") " pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.344736 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-combined-ca-bundle\") pod \"heat-cfnapi-bccc47696-bz777\" (UID: \"e1f4db23-d669-492d-94a3-1d6538f754e8\") " pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.344823 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llm44\" (UniqueName: \"kubernetes.io/projected/e1f4db23-d669-492d-94a3-1d6538f754e8-kube-api-access-llm44\") pod \"heat-cfnapi-bccc47696-bz777\" (UID: \"e1f4db23-d669-492d-94a3-1d6538f754e8\") " pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.352914 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-config-data\") pod \"heat-cfnapi-bccc47696-bz777\" (UID: \"e1f4db23-d669-492d-94a3-1d6538f754e8\") " pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.359861 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-config-data-custom\") pod \"heat-engine-5d49b888c4-hqhhl\" (UID: \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\") " pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.361327 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-combined-ca-bundle\") pod \"heat-engine-5d49b888c4-hqhhl\" (UID: \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\") " pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.361596 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-config-data\") pod \"heat-engine-5d49b888c4-hqhhl\" (UID: \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\") " pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.362619 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-576bfcb8cc-cr5zz"] Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.365276 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-combined-ca-bundle\") pod \"heat-cfnapi-bccc47696-bz777\" (UID: \"e1f4db23-d669-492d-94a3-1d6538f754e8\") " pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 
00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.369918 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h928l\" (UniqueName: \"kubernetes.io/projected/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-kube-api-access-h928l\") pod \"heat-engine-5d49b888c4-hqhhl\" (UID: \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\") " pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.370022 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-config-data-custom\") pod \"heat-cfnapi-bccc47696-bz777\" (UID: \"e1f4db23-d669-492d-94a3-1d6538f754e8\") " pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.370392 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llm44\" (UniqueName: \"kubernetes.io/projected/e1f4db23-d669-492d-94a3-1d6538f754e8-kube-api-access-llm44\") pod \"heat-cfnapi-bccc47696-bz777\" (UID: \"e1f4db23-d669-492d-94a3-1d6538f754e8\") " pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.419291 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.446908 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.446952 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-config-data-custom\") pod \"heat-api-576bfcb8cc-cr5zz\" (UID: \"49e2f802-447d-40ad-b7a5-b9530b0f9289\") " pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.446973 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.447039 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-config-data\") pod \"heat-api-576bfcb8cc-cr5zz\" (UID: \"49e2f802-447d-40ad-b7a5-b9530b0f9289\") " pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.447060 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjww9\" (UniqueName: \"kubernetes.io/projected/49e2f802-447d-40ad-b7a5-b9530b0f9289-kube-api-access-rjww9\") pod \"heat-api-576bfcb8cc-cr5zz\" (UID: 
\"49e2f802-447d-40ad-b7a5-b9530b0f9289\") " pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.447131 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.447270 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2rg2\" (UniqueName: \"kubernetes.io/projected/9f0b97b7-6c28-4a8b-99d7-242dde839d36-kube-api-access-d2rg2\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.447295 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-config\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.447352 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.447391 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-combined-ca-bundle\") pod \"heat-api-576bfcb8cc-cr5zz\" (UID: 
\"49e2f802-447d-40ad-b7a5-b9530b0f9289\") " pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.448354 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.449261 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.449974 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.450706 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-config\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.450921 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc 
kubenswrapper[4762]: I0308 00:46:07.482535 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.483800 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.484212 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="ceilometer-central-agent" containerID="cri-o://c104e693bb0acf095a9e8a0c4f40c2f743215c91460876e32786634352dd9cbc" gracePeriod=30 Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.484675 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="ceilometer-notification-agent" containerID="cri-o://01a2601dc3b06c4691de40776acdb16f2022d910d19fd4ad4f22b53dbc22e68d" gracePeriod=30 Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.484729 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="sg-core" containerID="cri-o://f5c8f3df74edc4e7bdfe58659bdaef728aae86bc6914e4be290b59b411c8b844" gracePeriod=30 Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.486210 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="proxy-httpd" containerID="cri-o://94f3b589ce361ec6a846395a912482e29424709696e2e434b55d669fc7d2f7f3" gracePeriod=30 Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.494740 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2rg2\" (UniqueName: \"kubernetes.io/projected/9f0b97b7-6c28-4a8b-99d7-242dde839d36-kube-api-access-d2rg2\") pod 
\"dnsmasq-dns-688b9f5b49-9ldwb\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.499042 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.197:3000/\": EOF" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.549974 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-combined-ca-bundle\") pod \"heat-api-576bfcb8cc-cr5zz\" (UID: \"49e2f802-447d-40ad-b7a5-b9530b0f9289\") " pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.550301 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-config-data-custom\") pod \"heat-api-576bfcb8cc-cr5zz\" (UID: \"49e2f802-447d-40ad-b7a5-b9530b0f9289\") " pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.550351 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-config-data\") pod \"heat-api-576bfcb8cc-cr5zz\" (UID: \"49e2f802-447d-40ad-b7a5-b9530b0f9289\") " pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.550376 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjww9\" (UniqueName: \"kubernetes.io/projected/49e2f802-447d-40ad-b7a5-b9530b0f9289-kube-api-access-rjww9\") pod \"heat-api-576bfcb8cc-cr5zz\" (UID: \"49e2f802-447d-40ad-b7a5-b9530b0f9289\") " pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:07 crc 
kubenswrapper[4762]: I0308 00:46:07.557468 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-config-data-custom\") pod \"heat-api-576bfcb8cc-cr5zz\" (UID: \"49e2f802-447d-40ad-b7a5-b9530b0f9289\") " pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.557399 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-config-data\") pod \"heat-api-576bfcb8cc-cr5zz\" (UID: \"49e2f802-447d-40ad-b7a5-b9530b0f9289\") " pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.567995 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-combined-ca-bundle\") pod \"heat-api-576bfcb8cc-cr5zz\" (UID: \"49e2f802-447d-40ad-b7a5-b9530b0f9289\") " pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.568541 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjww9\" (UniqueName: \"kubernetes.io/projected/49e2f802-447d-40ad-b7a5-b9530b0f9289-kube-api-access-rjww9\") pod \"heat-api-576bfcb8cc-cr5zz\" (UID: \"49e2f802-447d-40ad-b7a5-b9530b0f9289\") " pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.571233 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.756244 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.941025 4762 generic.go:334] "Generic (PLEG): container finished" podID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerID="f5c8f3df74edc4e7bdfe58659bdaef728aae86bc6914e4be290b59b411c8b844" exitCode=2 Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.941095 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7606b7b7-2804-4c3f-b617-34e50d83c068","Type":"ContainerDied","Data":"f5c8f3df74edc4e7bdfe58659bdaef728aae86bc6914e4be290b59b411c8b844"} Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.943798 4762 generic.go:334] "Generic (PLEG): container finished" podID="cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" containerID="a0733e093453af09e168a34e892daa07ac2b89fa65c4d15e5ae1c23d6c020ac7" exitCode=0 Mar 08 00:46:07 crc kubenswrapper[4762]: I0308 00:46:07.943833 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gktkm" event={"ID":"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5","Type":"ContainerDied","Data":"a0733e093453af09e168a34e892daa07ac2b89fa65c4d15e5ae1c23d6c020ac7"} Mar 08 00:46:08 crc kubenswrapper[4762]: I0308 00:46:08.960842 4762 generic.go:334] "Generic (PLEG): container finished" podID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerID="94f3b589ce361ec6a846395a912482e29424709696e2e434b55d669fc7d2f7f3" exitCode=0 Mar 08 00:46:08 crc kubenswrapper[4762]: I0308 00:46:08.961583 4762 generic.go:334] "Generic (PLEG): container finished" podID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerID="01a2601dc3b06c4691de40776acdb16f2022d910d19fd4ad4f22b53dbc22e68d" exitCode=0 Mar 08 00:46:08 crc kubenswrapper[4762]: I0308 00:46:08.961598 4762 generic.go:334] "Generic (PLEG): container finished" podID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerID="c104e693bb0acf095a9e8a0c4f40c2f743215c91460876e32786634352dd9cbc" exitCode=0 Mar 08 00:46:08 crc 
kubenswrapper[4762]: I0308 00:46:08.960914 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7606b7b7-2804-4c3f-b617-34e50d83c068","Type":"ContainerDied","Data":"94f3b589ce361ec6a846395a912482e29424709696e2e434b55d669fc7d2f7f3"} Mar 08 00:46:08 crc kubenswrapper[4762]: I0308 00:46:08.961664 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7606b7b7-2804-4c3f-b617-34e50d83c068","Type":"ContainerDied","Data":"01a2601dc3b06c4691de40776acdb16f2022d910d19fd4ad4f22b53dbc22e68d"} Mar 08 00:46:08 crc kubenswrapper[4762]: I0308 00:46:08.961713 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7606b7b7-2804-4c3f-b617-34e50d83c068","Type":"ContainerDied","Data":"c104e693bb0acf095a9e8a0c4f40c2f743215c91460876e32786634352dd9cbc"} Mar 08 00:46:10 crc kubenswrapper[4762]: I0308 00:46:10.333098 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 08 00:46:12 crc kubenswrapper[4762]: I0308 00:46:12.737470 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:12 crc kubenswrapper[4762]: I0308 00:46:12.740082 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6d78d68b57-45zwj" Mar 08 00:46:12 crc kubenswrapper[4762]: I0308 00:46:12.851368 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-smtf6"] Mar 08 00:46:12 crc kubenswrapper[4762]: I0308 00:46:12.853072 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-smtf6" Mar 08 00:46:12 crc kubenswrapper[4762]: I0308 00:46:12.867720 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-smtf6"] Mar 08 00:46:12 crc kubenswrapper[4762]: I0308 00:46:12.922408 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-nzmhb"] Mar 08 00:46:12 crc kubenswrapper[4762]: I0308 00:46:12.923751 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nzmhb" Mar 08 00:46:12 crc kubenswrapper[4762]: I0308 00:46:12.935013 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-nzmhb"] Mar 08 00:46:12 crc kubenswrapper[4762]: I0308 00:46:12.979179 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8q4d\" (UniqueName: \"kubernetes.io/projected/a7fac32d-8f26-459b-a67e-592f1e292d80-kube-api-access-q8q4d\") pod \"nova-api-db-create-smtf6\" (UID: \"a7fac32d-8f26-459b-a67e-592f1e292d80\") " pod="openstack/nova-api-db-create-smtf6" Mar 08 00:46:12 crc kubenswrapper[4762]: I0308 00:46:12.979331 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7fac32d-8f26-459b-a67e-592f1e292d80-operator-scripts\") pod \"nova-api-db-create-smtf6\" (UID: \"a7fac32d-8f26-459b-a67e-592f1e292d80\") " pod="openstack/nova-api-db-create-smtf6" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.059803 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8f67-account-create-update-k6g6t"] Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.062717 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8f67-account-create-update-k6g6t" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.068583 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.080044 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8f67-account-create-update-k6g6t"] Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.081262 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7fac32d-8f26-459b-a67e-592f1e292d80-operator-scripts\") pod \"nova-api-db-create-smtf6\" (UID: \"a7fac32d-8f26-459b-a67e-592f1e292d80\") " pod="openstack/nova-api-db-create-smtf6" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.081302 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f-operator-scripts\") pod \"nova-cell0-db-create-nzmhb\" (UID: \"e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f\") " pod="openstack/nova-cell0-db-create-nzmhb" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.081418 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8q4d\" (UniqueName: \"kubernetes.io/projected/a7fac32d-8f26-459b-a67e-592f1e292d80-kube-api-access-q8q4d\") pod \"nova-api-db-create-smtf6\" (UID: \"a7fac32d-8f26-459b-a67e-592f1e292d80\") " pod="openstack/nova-api-db-create-smtf6" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.081445 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njz5g\" (UniqueName: \"kubernetes.io/projected/e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f-kube-api-access-njz5g\") pod \"nova-cell0-db-create-nzmhb\" (UID: \"e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f\") " 
pod="openstack/nova-cell0-db-create-nzmhb" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.083929 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7fac32d-8f26-459b-a67e-592f1e292d80-operator-scripts\") pod \"nova-api-db-create-smtf6\" (UID: \"a7fac32d-8f26-459b-a67e-592f1e292d80\") " pod="openstack/nova-api-db-create-smtf6" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.150422 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8q4d\" (UniqueName: \"kubernetes.io/projected/a7fac32d-8f26-459b-a67e-592f1e292d80-kube-api-access-q8q4d\") pod \"nova-api-db-create-smtf6\" (UID: \"a7fac32d-8f26-459b-a67e-592f1e292d80\") " pod="openstack/nova-api-db-create-smtf6" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.170821 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-pd5j4"] Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.172151 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-pd5j4" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.183476 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfr45\" (UniqueName: \"kubernetes.io/projected/bcd021de-3952-4553-ae54-5c244346412a-kube-api-access-bfr45\") pod \"nova-api-8f67-account-create-update-k6g6t\" (UID: \"bcd021de-3952-4553-ae54-5c244346412a\") " pod="openstack/nova-api-8f67-account-create-update-k6g6t" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.183519 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd021de-3952-4553-ae54-5c244346412a-operator-scripts\") pod \"nova-api-8f67-account-create-update-k6g6t\" (UID: \"bcd021de-3952-4553-ae54-5c244346412a\") " pod="openstack/nova-api-8f67-account-create-update-k6g6t" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.183556 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njz5g\" (UniqueName: \"kubernetes.io/projected/e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f-kube-api-access-njz5g\") pod \"nova-cell0-db-create-nzmhb\" (UID: \"e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f\") " pod="openstack/nova-cell0-db-create-nzmhb" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.183673 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f-operator-scripts\") pod \"nova-cell0-db-create-nzmhb\" (UID: \"e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f\") " pod="openstack/nova-cell0-db-create-nzmhb" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.185297 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f-operator-scripts\") pod 
\"nova-cell0-db-create-nzmhb\" (UID: \"e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f\") " pod="openstack/nova-cell0-db-create-nzmhb" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.191020 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-smtf6" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.195201 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-pd5j4"] Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.262401 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njz5g\" (UniqueName: \"kubernetes.io/projected/e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f-kube-api-access-njz5g\") pod \"nova-cell0-db-create-nzmhb\" (UID: \"e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f\") " pod="openstack/nova-cell0-db-create-nzmhb" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.269947 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nzmhb" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.294921 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfr45\" (UniqueName: \"kubernetes.io/projected/bcd021de-3952-4553-ae54-5c244346412a-kube-api-access-bfr45\") pod \"nova-api-8f67-account-create-update-k6g6t\" (UID: \"bcd021de-3952-4553-ae54-5c244346412a\") " pod="openstack/nova-api-8f67-account-create-update-k6g6t" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.294971 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd021de-3952-4553-ae54-5c244346412a-operator-scripts\") pod \"nova-api-8f67-account-create-update-k6g6t\" (UID: \"bcd021de-3952-4553-ae54-5c244346412a\") " pod="openstack/nova-api-8f67-account-create-update-k6g6t" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.295017 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjhcv\" (UniqueName: \"kubernetes.io/projected/a9551f6c-a71d-44b6-adb7-fe69e9c4f259-kube-api-access-pjhcv\") pod \"nova-cell1-db-create-pd5j4\" (UID: \"a9551f6c-a71d-44b6-adb7-fe69e9c4f259\") " pod="openstack/nova-cell1-db-create-pd5j4" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.295069 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9551f6c-a71d-44b6-adb7-fe69e9c4f259-operator-scripts\") pod \"nova-cell1-db-create-pd5j4\" (UID: \"a9551f6c-a71d-44b6-adb7-fe69e9c4f259\") " pod="openstack/nova-cell1-db-create-pd5j4" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.296006 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd021de-3952-4553-ae54-5c244346412a-operator-scripts\") pod \"nova-api-8f67-account-create-update-k6g6t\" (UID: \"bcd021de-3952-4553-ae54-5c244346412a\") " pod="openstack/nova-api-8f67-account-create-update-k6g6t" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.322511 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfr45\" (UniqueName: \"kubernetes.io/projected/bcd021de-3952-4553-ae54-5c244346412a-kube-api-access-bfr45\") pod \"nova-api-8f67-account-create-update-k6g6t\" (UID: \"bcd021de-3952-4553-ae54-5c244346412a\") " pod="openstack/nova-api-8f67-account-create-update-k6g6t" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.322751 4762 scope.go:117] "RemoveContainer" containerID="82bc28b1cfd51e7f29bdfcebcf3c0a11cd81837732695b690ade868be0942473" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.328272 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-24e5-account-create-update-25drx"] Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.333906 4762 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-24e5-account-create-update-25drx"] Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.334003 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-24e5-account-create-update-25drx" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.336325 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.397669 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjhcv\" (UniqueName: \"kubernetes.io/projected/a9551f6c-a71d-44b6-adb7-fe69e9c4f259-kube-api-access-pjhcv\") pod \"nova-cell1-db-create-pd5j4\" (UID: \"a9551f6c-a71d-44b6-adb7-fe69e9c4f259\") " pod="openstack/nova-cell1-db-create-pd5j4" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.397796 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9551f6c-a71d-44b6-adb7-fe69e9c4f259-operator-scripts\") pod \"nova-cell1-db-create-pd5j4\" (UID: \"a9551f6c-a71d-44b6-adb7-fe69e9c4f259\") " pod="openstack/nova-cell1-db-create-pd5j4" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.400550 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9551f6c-a71d-44b6-adb7-fe69e9c4f259-operator-scripts\") pod \"nova-cell1-db-create-pd5j4\" (UID: \"a9551f6c-a71d-44b6-adb7-fe69e9c4f259\") " pod="openstack/nova-cell1-db-create-pd5j4" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.410581 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8f67-account-create-update-k6g6t" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.419381 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjhcv\" (UniqueName: \"kubernetes.io/projected/a9551f6c-a71d-44b6-adb7-fe69e9c4f259-kube-api-access-pjhcv\") pod \"nova-cell1-db-create-pd5j4\" (UID: \"a9551f6c-a71d-44b6-adb7-fe69e9c4f259\") " pod="openstack/nova-cell1-db-create-pd5j4" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.447370 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-98ce-account-create-update-bz5cs"] Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.448580 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-98ce-account-create-update-bz5cs" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.454200 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.474323 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-98ce-account-create-update-bz5cs"] Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.509531 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a25763c-610d-42ef-ad0e-79c540318681-operator-scripts\") pod \"nova-cell0-24e5-account-create-update-25drx\" (UID: \"4a25763c-610d-42ef-ad0e-79c540318681\") " pod="openstack/nova-cell0-24e5-account-create-update-25drx" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.510007 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsd6s\" (UniqueName: \"kubernetes.io/projected/4a25763c-610d-42ef-ad0e-79c540318681-kube-api-access-wsd6s\") pod \"nova-cell0-24e5-account-create-update-25drx\" (UID: 
\"4a25763c-610d-42ef-ad0e-79c540318681\") " pod="openstack/nova-cell0-24e5-account-create-update-25drx" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.577031 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pd5j4" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.612211 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zj6n\" (UniqueName: \"kubernetes.io/projected/b48c29ff-2a5a-4583-86a2-5550b8653bed-kube-api-access-9zj6n\") pod \"nova-cell1-98ce-account-create-update-bz5cs\" (UID: \"b48c29ff-2a5a-4583-86a2-5550b8653bed\") " pod="openstack/nova-cell1-98ce-account-create-update-bz5cs" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.612305 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsd6s\" (UniqueName: \"kubernetes.io/projected/4a25763c-610d-42ef-ad0e-79c540318681-kube-api-access-wsd6s\") pod \"nova-cell0-24e5-account-create-update-25drx\" (UID: \"4a25763c-610d-42ef-ad0e-79c540318681\") " pod="openstack/nova-cell0-24e5-account-create-update-25drx" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.612336 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b48c29ff-2a5a-4583-86a2-5550b8653bed-operator-scripts\") pod \"nova-cell1-98ce-account-create-update-bz5cs\" (UID: \"b48c29ff-2a5a-4583-86a2-5550b8653bed\") " pod="openstack/nova-cell1-98ce-account-create-update-bz5cs" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.612439 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a25763c-610d-42ef-ad0e-79c540318681-operator-scripts\") pod \"nova-cell0-24e5-account-create-update-25drx\" (UID: \"4a25763c-610d-42ef-ad0e-79c540318681\") " 
pod="openstack/nova-cell0-24e5-account-create-update-25drx" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.613217 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a25763c-610d-42ef-ad0e-79c540318681-operator-scripts\") pod \"nova-cell0-24e5-account-create-update-25drx\" (UID: \"4a25763c-610d-42ef-ad0e-79c540318681\") " pod="openstack/nova-cell0-24e5-account-create-update-25drx" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.624073 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.197:3000/\": dial tcp 10.217.0.197:3000: connect: connection refused" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.628236 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsd6s\" (UniqueName: \"kubernetes.io/projected/4a25763c-610d-42ef-ad0e-79c540318681-kube-api-access-wsd6s\") pod \"nova-cell0-24e5-account-create-update-25drx\" (UID: \"4a25763c-610d-42ef-ad0e-79c540318681\") " pod="openstack/nova-cell0-24e5-account-create-update-25drx" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.678449 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-24e5-account-create-update-25drx" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.714143 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zj6n\" (UniqueName: \"kubernetes.io/projected/b48c29ff-2a5a-4583-86a2-5550b8653bed-kube-api-access-9zj6n\") pod \"nova-cell1-98ce-account-create-update-bz5cs\" (UID: \"b48c29ff-2a5a-4583-86a2-5550b8653bed\") " pod="openstack/nova-cell1-98ce-account-create-update-bz5cs" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.714234 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b48c29ff-2a5a-4583-86a2-5550b8653bed-operator-scripts\") pod \"nova-cell1-98ce-account-create-update-bz5cs\" (UID: \"b48c29ff-2a5a-4583-86a2-5550b8653bed\") " pod="openstack/nova-cell1-98ce-account-create-update-bz5cs" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.715323 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b48c29ff-2a5a-4583-86a2-5550b8653bed-operator-scripts\") pod \"nova-cell1-98ce-account-create-update-bz5cs\" (UID: \"b48c29ff-2a5a-4583-86a2-5550b8653bed\") " pod="openstack/nova-cell1-98ce-account-create-update-bz5cs" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.732005 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zj6n\" (UniqueName: \"kubernetes.io/projected/b48c29ff-2a5a-4583-86a2-5550b8653bed-kube-api-access-9zj6n\") pod \"nova-cell1-98ce-account-create-update-bz5cs\" (UID: \"b48c29ff-2a5a-4583-86a2-5550b8653bed\") " pod="openstack/nova-cell1-98ce-account-create-update-bz5cs" Mar 08 00:46:13 crc kubenswrapper[4762]: I0308 00:46:13.816304 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-98ce-account-create-update-bz5cs" Mar 08 00:46:14 crc kubenswrapper[4762]: I0308 00:46:14.770621 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-68f674dbc4-pt9pq"] Mar 08 00:46:14 crc kubenswrapper[4762]: I0308 00:46:14.774588 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:46:14 crc kubenswrapper[4762]: I0308 00:46:14.781968 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-68f674dbc4-pt9pq"] Mar 08 00:46:14 crc kubenswrapper[4762]: I0308 00:46:14.881668 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-68c5fb77db-mz745"] Mar 08 00:46:14 crc kubenswrapper[4762]: I0308 00:46:14.883081 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-68c5fb77db-mz745" Mar 08 00:46:14 crc kubenswrapper[4762]: I0308 00:46:14.893751 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-57f8d966d9-594sf"] Mar 08 00:46:14 crc kubenswrapper[4762]: I0308 00:46:14.895348 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-57f8d966d9-594sf" Mar 08 00:46:14 crc kubenswrapper[4762]: I0308 00:46:14.921315 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-68c5fb77db-mz745"] Mar 08 00:46:14 crc kubenswrapper[4762]: I0308 00:46:14.941497 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-57f8d966d9-594sf"] Mar 08 00:46:14 crc kubenswrapper[4762]: I0308 00:46:14.949925 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-combined-ca-bundle\") pod \"heat-engine-68f674dbc4-pt9pq\" (UID: \"e2965295-b595-4655-80c6-daa506d337c7\") " pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:46:14 crc kubenswrapper[4762]: I0308 00:46:14.950058 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njmm6\" (UniqueName: \"kubernetes.io/projected/e2965295-b595-4655-80c6-daa506d337c7-kube-api-access-njmm6\") pod \"heat-engine-68f674dbc4-pt9pq\" (UID: \"e2965295-b595-4655-80c6-daa506d337c7\") " pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:46:14 crc kubenswrapper[4762]: I0308 00:46:14.950272 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-config-data\") pod \"heat-engine-68f674dbc4-pt9pq\" (UID: \"e2965295-b595-4655-80c6-daa506d337c7\") " pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:46:14 crc kubenswrapper[4762]: I0308 00:46:14.950383 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-config-data-custom\") pod \"heat-engine-68f674dbc4-pt9pq\" (UID: \"e2965295-b595-4655-80c6-daa506d337c7\") " 
pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.052532 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-config-data\") pod \"heat-cfnapi-68c5fb77db-mz745\" (UID: \"45e01f25-822e-454b-a7c8-e43ffd1feb56\") " pod="openstack/heat-cfnapi-68c5fb77db-mz745" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.052603 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-config-data\") pod \"heat-api-57f8d966d9-594sf\" (UID: \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\") " pod="openstack/heat-api-57f8d966d9-594sf" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.052628 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njmm6\" (UniqueName: \"kubernetes.io/projected/e2965295-b595-4655-80c6-daa506d337c7-kube-api-access-njmm6\") pod \"heat-engine-68f674dbc4-pt9pq\" (UID: \"e2965295-b595-4655-80c6-daa506d337c7\") " pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.052670 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjlmr\" (UniqueName: \"kubernetes.io/projected/45e01f25-822e-454b-a7c8-e43ffd1feb56-kube-api-access-bjlmr\") pod \"heat-cfnapi-68c5fb77db-mz745\" (UID: \"45e01f25-822e-454b-a7c8-e43ffd1feb56\") " pod="openstack/heat-cfnapi-68c5fb77db-mz745" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.052714 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-combined-ca-bundle\") pod \"heat-api-57f8d966d9-594sf\" (UID: 
\"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\") " pod="openstack/heat-api-57f8d966d9-594sf" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.052740 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-config-data\") pod \"heat-engine-68f674dbc4-pt9pq\" (UID: \"e2965295-b595-4655-80c6-daa506d337c7\") " pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.052822 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-config-data-custom\") pod \"heat-engine-68f674dbc4-pt9pq\" (UID: \"e2965295-b595-4655-80c6-daa506d337c7\") " pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.052855 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p242k\" (UniqueName: \"kubernetes.io/projected/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-kube-api-access-p242k\") pod \"heat-api-57f8d966d9-594sf\" (UID: \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\") " pod="openstack/heat-api-57f8d966d9-594sf" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.052890 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-combined-ca-bundle\") pod \"heat-cfnapi-68c5fb77db-mz745\" (UID: \"45e01f25-822e-454b-a7c8-e43ffd1feb56\") " pod="openstack/heat-cfnapi-68c5fb77db-mz745" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.052912 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-combined-ca-bundle\") pod \"heat-engine-68f674dbc4-pt9pq\" (UID: 
\"e2965295-b595-4655-80c6-daa506d337c7\") " pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.052929 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-config-data-custom\") pod \"heat-api-57f8d966d9-594sf\" (UID: \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\") " pod="openstack/heat-api-57f8d966d9-594sf" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.052957 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-config-data-custom\") pod \"heat-cfnapi-68c5fb77db-mz745\" (UID: \"45e01f25-822e-454b-a7c8-e43ffd1feb56\") " pod="openstack/heat-cfnapi-68c5fb77db-mz745" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.060398 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-config-data-custom\") pod \"heat-engine-68f674dbc4-pt9pq\" (UID: \"e2965295-b595-4655-80c6-daa506d337c7\") " pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.061246 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-config-data\") pod \"heat-engine-68f674dbc4-pt9pq\" (UID: \"e2965295-b595-4655-80c6-daa506d337c7\") " pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.067527 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-combined-ca-bundle\") pod \"heat-engine-68f674dbc4-pt9pq\" (UID: 
\"e2965295-b595-4655-80c6-daa506d337c7\") " pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.074302 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njmm6\" (UniqueName: \"kubernetes.io/projected/e2965295-b595-4655-80c6-daa506d337c7-kube-api-access-njmm6\") pod \"heat-engine-68f674dbc4-pt9pq\" (UID: \"e2965295-b595-4655-80c6-daa506d337c7\") " pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.129263 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.154253 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-config-data\") pod \"heat-api-57f8d966d9-594sf\" (UID: \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\") " pod="openstack/heat-api-57f8d966d9-594sf" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.154307 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjlmr\" (UniqueName: \"kubernetes.io/projected/45e01f25-822e-454b-a7c8-e43ffd1feb56-kube-api-access-bjlmr\") pod \"heat-cfnapi-68c5fb77db-mz745\" (UID: \"45e01f25-822e-454b-a7c8-e43ffd1feb56\") " pod="openstack/heat-cfnapi-68c5fb77db-mz745" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.154339 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-combined-ca-bundle\") pod \"heat-api-57f8d966d9-594sf\" (UID: \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\") " pod="openstack/heat-api-57f8d966d9-594sf" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.154389 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-p242k\" (UniqueName: \"kubernetes.io/projected/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-kube-api-access-p242k\") pod \"heat-api-57f8d966d9-594sf\" (UID: \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\") " pod="openstack/heat-api-57f8d966d9-594sf" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.154423 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-combined-ca-bundle\") pod \"heat-cfnapi-68c5fb77db-mz745\" (UID: \"45e01f25-822e-454b-a7c8-e43ffd1feb56\") " pod="openstack/heat-cfnapi-68c5fb77db-mz745" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.154444 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-config-data-custom\") pod \"heat-api-57f8d966d9-594sf\" (UID: \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\") " pod="openstack/heat-api-57f8d966d9-594sf" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.155091 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-config-data-custom\") pod \"heat-cfnapi-68c5fb77db-mz745\" (UID: \"45e01f25-822e-454b-a7c8-e43ffd1feb56\") " pod="openstack/heat-cfnapi-68c5fb77db-mz745" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.155426 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-config-data\") pod \"heat-cfnapi-68c5fb77db-mz745\" (UID: \"45e01f25-822e-454b-a7c8-e43ffd1feb56\") " pod="openstack/heat-cfnapi-68c5fb77db-mz745" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.158211 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-combined-ca-bundle\") pod \"heat-api-57f8d966d9-594sf\" (UID: \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\") " pod="openstack/heat-api-57f8d966d9-594sf" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.158508 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-config-data-custom\") pod \"heat-cfnapi-68c5fb77db-mz745\" (UID: \"45e01f25-822e-454b-a7c8-e43ffd1feb56\") " pod="openstack/heat-cfnapi-68c5fb77db-mz745" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.158623 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-config-data\") pod \"heat-api-57f8d966d9-594sf\" (UID: \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\") " pod="openstack/heat-api-57f8d966d9-594sf" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.161100 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-combined-ca-bundle\") pod \"heat-cfnapi-68c5fb77db-mz745\" (UID: \"45e01f25-822e-454b-a7c8-e43ffd1feb56\") " pod="openstack/heat-cfnapi-68c5fb77db-mz745" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.164629 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-config-data-custom\") pod \"heat-api-57f8d966d9-594sf\" (UID: \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\") " pod="openstack/heat-api-57f8d966d9-594sf" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.173453 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-config-data\") pod 
\"heat-cfnapi-68c5fb77db-mz745\" (UID: \"45e01f25-822e-454b-a7c8-e43ffd1feb56\") " pod="openstack/heat-cfnapi-68c5fb77db-mz745" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.175428 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjlmr\" (UniqueName: \"kubernetes.io/projected/45e01f25-822e-454b-a7c8-e43ffd1feb56-kube-api-access-bjlmr\") pod \"heat-cfnapi-68c5fb77db-mz745\" (UID: \"45e01f25-822e-454b-a7c8-e43ffd1feb56\") " pod="openstack/heat-cfnapi-68c5fb77db-mz745" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.181037 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p242k\" (UniqueName: \"kubernetes.io/projected/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-kube-api-access-p242k\") pod \"heat-api-57f8d966d9-594sf\" (UID: \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\") " pod="openstack/heat-api-57f8d966d9-594sf" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.200089 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-68c5fb77db-mz745" Mar 08 00:46:15 crc kubenswrapper[4762]: I0308 00:46:15.210810 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-57f8d966d9-594sf" Mar 08 00:46:16 crc kubenswrapper[4762]: I0308 00:46:16.552021 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-pd5j4"] Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.044805 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.074983 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pd5j4" event={"ID":"a9551f6c-a71d-44b6-adb7-fe69e9c4f259","Type":"ContainerStarted","Data":"663bf90ab8586ab8069779c9802314deb4e10543b912dbb563bc232405df9ddd"} Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.092141 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7606b7b7-2804-4c3f-b617-34e50d83c068","Type":"ContainerDied","Data":"5355440d5b4a3c7bee4c0c6be7e41e3952953aca41d8ef3a1149c2aba971054f"} Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.092201 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.092210 4762 scope.go:117] "RemoveContainer" containerID="94f3b589ce361ec6a846395a912482e29424709696e2e434b55d669fc7d2f7f3" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.114521 4762 scope.go:117] "RemoveContainer" containerID="f5c8f3df74edc4e7bdfe58659bdaef728aae86bc6914e4be290b59b411c8b844" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.147906 4762 scope.go:117] "RemoveContainer" containerID="01a2601dc3b06c4691de40776acdb16f2022d910d19fd4ad4f22b53dbc22e68d" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.173122 4762 scope.go:117] "RemoveContainer" containerID="c104e693bb0acf095a9e8a0c4f40c2f743215c91460876e32786634352dd9cbc" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.218648 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7606b7b7-2804-4c3f-b617-34e50d83c068-log-httpd\") pod \"7606b7b7-2804-4c3f-b617-34e50d83c068\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.218804 4762 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-combined-ca-bundle\") pod \"7606b7b7-2804-4c3f-b617-34e50d83c068\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.218829 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-config-data\") pod \"7606b7b7-2804-4c3f-b617-34e50d83c068\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.218874 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-sg-core-conf-yaml\") pod \"7606b7b7-2804-4c3f-b617-34e50d83c068\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.218953 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t2l9\" (UniqueName: \"kubernetes.io/projected/7606b7b7-2804-4c3f-b617-34e50d83c068-kube-api-access-7t2l9\") pod \"7606b7b7-2804-4c3f-b617-34e50d83c068\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.218976 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-scripts\") pod \"7606b7b7-2804-4c3f-b617-34e50d83c068\" (UID: \"7606b7b7-2804-4c3f-b617-34e50d83c068\") " Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.219009 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7606b7b7-2804-4c3f-b617-34e50d83c068-run-httpd\") pod \"7606b7b7-2804-4c3f-b617-34e50d83c068\" (UID: 
\"7606b7b7-2804-4c3f-b617-34e50d83c068\") " Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.219724 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7606b7b7-2804-4c3f-b617-34e50d83c068-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7606b7b7-2804-4c3f-b617-34e50d83c068" (UID: "7606b7b7-2804-4c3f-b617-34e50d83c068"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.219998 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7606b7b7-2804-4c3f-b617-34e50d83c068-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7606b7b7-2804-4c3f-b617-34e50d83c068" (UID: "7606b7b7-2804-4c3f-b617-34e50d83c068"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.220029 4762 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7606b7b7-2804-4c3f-b617-34e50d83c068-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.250903 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-scripts" (OuterVolumeSpecName: "scripts") pod "7606b7b7-2804-4c3f-b617-34e50d83c068" (UID: "7606b7b7-2804-4c3f-b617-34e50d83c068"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.266918 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7606b7b7-2804-4c3f-b617-34e50d83c068-kube-api-access-7t2l9" (OuterVolumeSpecName: "kube-api-access-7t2l9") pod "7606b7b7-2804-4c3f-b617-34e50d83c068" (UID: "7606b7b7-2804-4c3f-b617-34e50d83c068"). InnerVolumeSpecName "kube-api-access-7t2l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.285027 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7606b7b7-2804-4c3f-b617-34e50d83c068" (UID: "7606b7b7-2804-4c3f-b617-34e50d83c068"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.324734 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t2l9\" (UniqueName: \"kubernetes.io/projected/7606b7b7-2804-4c3f-b617-34e50d83c068-kube-api-access-7t2l9\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.324787 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.324801 4762 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7606b7b7-2804-4c3f-b617-34e50d83c068-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.324813 4762 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.392124 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-576bfcb8cc-cr5zz"] Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.392166 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-bccc47696-bz777"] Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.402554 4762 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-api-557f98fcf9-zd48x"] Mar 08 00:46:17 crc kubenswrapper[4762]: E0308 00:46:17.403059 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="ceilometer-notification-agent" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.403076 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="ceilometer-notification-agent" Mar 08 00:46:17 crc kubenswrapper[4762]: E0308 00:46:17.403102 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="ceilometer-central-agent" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.403110 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="ceilometer-central-agent" Mar 08 00:46:17 crc kubenswrapper[4762]: E0308 00:46:17.403128 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="sg-core" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.403135 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="sg-core" Mar 08 00:46:17 crc kubenswrapper[4762]: E0308 00:46:17.403146 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="proxy-httpd" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.403152 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="proxy-httpd" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.403340 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="proxy-httpd" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.403353 4762 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="ceilometer-central-agent" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.403371 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="ceilometer-notification-agent" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.403386 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" containerName="sg-core" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.404074 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.406655 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.406814 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.413381 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-66558fbb95-w94pj"] Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.414738 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.416493 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7606b7b7-2804-4c3f-b617-34e50d83c068" (UID: "7606b7b7-2804-4c3f-b617-34e50d83c068"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.416544 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.416630 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.423833 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-config-data" (OuterVolumeSpecName: "config-data") pod "7606b7b7-2804-4c3f-b617-34e50d83c068" (UID: "7606b7b7-2804-4c3f-b617-34e50d83c068"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.426456 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.426480 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7606b7b7-2804-4c3f-b617-34e50d83c068-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.437026 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66558fbb95-w94pj"] Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.454156 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-557f98fcf9-zd48x"] Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.533271 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpwlt\" (UniqueName: \"kubernetes.io/projected/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-kube-api-access-dpwlt\") pod 
\"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.533582 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-config-data\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.533606 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kklmh\" (UniqueName: \"kubernetes.io/projected/0c28709c-824a-4d55-9741-7d37406ae689-kube-api-access-kklmh\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.533626 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-config-data\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.533667 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-internal-tls-certs\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.533702 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-public-tls-certs\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.533724 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-public-tls-certs\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.533744 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-config-data-custom\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.533794 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-combined-ca-bundle\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.533812 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-internal-tls-certs\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.533846 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-combined-ca-bundle\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.533868 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-config-data-custom\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.543613 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-68f674dbc4-pt9pq"] Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.637413 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-config-data\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.637489 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-internal-tls-certs\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.637531 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-public-tls-certs\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 
crc kubenswrapper[4762]: I0308 00:46:17.637554 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-public-tls-certs\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.637579 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-config-data-custom\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.637607 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-combined-ca-bundle\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.637623 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-internal-tls-certs\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.637660 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-combined-ca-bundle\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 
00:46:17.637681 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-config-data-custom\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.637900 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpwlt\" (UniqueName: \"kubernetes.io/projected/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-kube-api-access-dpwlt\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.637929 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-config-data\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.637955 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kklmh\" (UniqueName: \"kubernetes.io/projected/0c28709c-824a-4d55-9741-7d37406ae689-kube-api-access-kklmh\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.643182 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-internal-tls-certs\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.643815 4762 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-public-tls-certs\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.644182 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-internal-tls-certs\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.645456 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-combined-ca-bundle\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.645619 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-config-data\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.646006 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-config-data-custom\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.646510 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-public-tls-certs\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.649445 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-config-data-custom\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.653496 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-combined-ca-bundle\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.657939 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-config-data\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.658792 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kklmh\" (UniqueName: \"kubernetes.io/projected/0c28709c-824a-4d55-9741-7d37406ae689-kube-api-access-kklmh\") pod \"heat-cfnapi-66558fbb95-w94pj\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.659442 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpwlt\" (UniqueName: 
\"kubernetes.io/projected/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-kube-api-access-dpwlt\") pod \"heat-api-557f98fcf9-zd48x\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.757067 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.764276 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-nzmhb"] Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.774371 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.776220 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.806904 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.815806 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-24e5-account-create-update-25drx"] Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.827331 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-68c5fb77db-mz745"] Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.839857 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.843613 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.848584 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.849536 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.856942 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.945084 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.945481 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/703d06da-b287-4009-b69b-f24d3b583a7a-log-httpd\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.945587 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7kl\" (UniqueName: \"kubernetes.io/projected/703d06da-b287-4009-b69b-f24d3b583a7a-kube-api-access-pq7kl\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.945636 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-config-data\") pod \"ceilometer-0\" (UID: 
\"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.945651 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.945734 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-scripts\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:17 crc kubenswrapper[4762]: I0308 00:46:17.946021 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/703d06da-b287-4009-b69b-f24d3b583a7a-run-httpd\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.048150 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq7kl\" (UniqueName: \"kubernetes.io/projected/703d06da-b287-4009-b69b-f24d3b583a7a-kube-api-access-pq7kl\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.048205 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-config-data\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.048227 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.048296 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-scripts\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.048361 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/703d06da-b287-4009-b69b-f24d3b583a7a-run-httpd\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.048387 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.048407 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/703d06da-b287-4009-b69b-f24d3b583a7a-log-httpd\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.048850 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/703d06da-b287-4009-b69b-f24d3b583a7a-log-httpd\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " 
pod="openstack/ceilometer-0" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.049391 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/703d06da-b287-4009-b69b-f24d3b583a7a-run-httpd\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.059865 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.067230 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-scripts\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.067416 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-config-data\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.067524 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.074317 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq7kl\" (UniqueName: 
\"kubernetes.io/projected/703d06da-b287-4009-b69b-f24d3b583a7a-kube-api-access-pq7kl\") pod \"ceilometer-0\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") " pod="openstack/ceilometer-0" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.145223 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68f674dbc4-pt9pq" event={"ID":"e2965295-b595-4655-80c6-daa506d337c7","Type":"ContainerStarted","Data":"88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b"} Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.145552 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68f674dbc4-pt9pq" event={"ID":"e2965295-b595-4655-80c6-daa506d337c7","Type":"ContainerStarted","Data":"991c2f25f8ab7cda162a82d47d414919eaf365482a10972e582943627ee61a2c"} Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.145600 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.154620 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gktkm" event={"ID":"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5","Type":"ContainerStarted","Data":"82544994f138d07437cafaf075d5e9cddd7693235be313541e40a1f367ce47dd"} Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.160102 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"59501363-c16d-4d5b-97b4-42322e95ab83","Type":"ContainerStarted","Data":"4ecba075ca8615327c1da02e268091568ef80c920f05d525eb6d80ca2d1bbef3"} Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.176265 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.198263 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-24e5-account-create-update-25drx" event={"ID":"4a25763c-610d-42ef-ad0e-79c540318681","Type":"ContainerStarted","Data":"47bc565248bff0e09c955e1e7cb147379b58f8dc41cfb007b4daba23743668ba"} Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.198300 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-24e5-account-create-update-25drx" event={"ID":"4a25763c-610d-42ef-ad0e-79c540318681","Type":"ContainerStarted","Data":"d9f1400e612a9bb37b84b6f985010243bef77a12058c53618ff9adabc6290e82"} Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.210687 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-68f674dbc4-pt9pq" podStartSLOduration=4.210661713 podStartE2EDuration="4.210661713s" podCreationTimestamp="2026-03-08 00:46:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:46:18.175148081 +0000 UTC m=+1399.649292425" watchObservedRunningTime="2026-03-08 00:46:18.210661713 +0000 UTC m=+1399.684806057" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.230952 4762 generic.go:334] "Generic (PLEG): container finished" podID="a9551f6c-a71d-44b6-adb7-fe69e9c4f259" containerID="349d4e422e67335ccfdd9dcd824d014d8e2f4239f458dda690b7c9c13aedef44" exitCode=0 Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.231075 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pd5j4" event={"ID":"a9551f6c-a71d-44b6-adb7-fe69e9c4f259","Type":"ContainerDied","Data":"349d4e422e67335ccfdd9dcd824d014d8e2f4239f458dda690b7c9c13aedef44"} Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.237214 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/heat-cfnapi-68c5fb77db-mz745" event={"ID":"45e01f25-822e-454b-a7c8-e43ffd1feb56","Type":"ContainerStarted","Data":"9394060e21f35ecf800ac5c7003ac9ef1cbb29fee8daf0b8e40dc75be32537cf"} Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.247024 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nzmhb" event={"ID":"e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f","Type":"ContainerStarted","Data":"1dc9acc55c9a8928eb946923df70d186dd640b3b6105601cb8d31d26fb221bc3"} Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.284183 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-smtf6"] Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.318178 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gktkm" podStartSLOduration=4.561990381 podStartE2EDuration="15.31815859s" podCreationTimestamp="2026-03-08 00:46:03 +0000 UTC" firstStartedPulling="2026-03-08 00:46:05.913464963 +0000 UTC m=+1387.387609307" lastFinishedPulling="2026-03-08 00:46:16.669633172 +0000 UTC m=+1398.143777516" observedRunningTime="2026-03-08 00:46:18.205149735 +0000 UTC m=+1399.679294079" watchObservedRunningTime="2026-03-08 00:46:18.31815859 +0000 UTC m=+1399.792302934" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.345102 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5d49b888c4-hqhhl"] Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.354018 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-bccc47696-bz777"] Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.378027 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-98ce-account-create-update-bz5cs"] Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.386595 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" 
podStartSLOduration=3.378014007 podStartE2EDuration="20.386578414s" podCreationTimestamp="2026-03-08 00:45:58 +0000 UTC" firstStartedPulling="2026-03-08 00:45:59.047148206 +0000 UTC m=+1380.521292550" lastFinishedPulling="2026-03-08 00:46:16.055712613 +0000 UTC m=+1397.529856957" observedRunningTime="2026-03-08 00:46:18.256539042 +0000 UTC m=+1399.730683386" watchObservedRunningTime="2026-03-08 00:46:18.386578414 +0000 UTC m=+1399.860722758" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.392214 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-24e5-account-create-update-25drx" podStartSLOduration=5.392200986 podStartE2EDuration="5.392200986s" podCreationTimestamp="2026-03-08 00:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:46:18.280405399 +0000 UTC m=+1399.754549743" watchObservedRunningTime="2026-03-08 00:46:18.392200986 +0000 UTC m=+1399.866345320" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.411973 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8f67-account-create-update-k6g6t"] Mar 08 00:46:18 crc kubenswrapper[4762]: W0308 00:46:18.415943 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49e2f802_447d_40ad_b7a5_b9530b0f9289.slice/crio-6409d7bf21e5c38163cf3bfaa8c593b5e40bd1100407751adeadd91c3d68058f WatchSource:0}: Error finding container 6409d7bf21e5c38163cf3bfaa8c593b5e40bd1100407751adeadd91c3d68058f: Status 404 returned error can't find the container with id 6409d7bf21e5c38163cf3bfaa8c593b5e40bd1100407751adeadd91c3d68058f Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.427190 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-576bfcb8cc-cr5zz"] Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.437116 4762 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/heat-api-57f8d966d9-594sf"] Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.470483 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-9ldwb"] Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.532012 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-nzmhb" podStartSLOduration=6.531989946 podStartE2EDuration="6.531989946s" podCreationTimestamp="2026-03-08 00:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:46:18.336981933 +0000 UTC m=+1399.811126287" watchObservedRunningTime="2026-03-08 00:46:18.531989946 +0000 UTC m=+1400.006134290" Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.587702 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-557f98fcf9-zd48x"] Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.629904 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-66558fbb95-w94pj"] Mar 08 00:46:18 crc kubenswrapper[4762]: I0308 00:46:18.951898 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:46:19 crc kubenswrapper[4762]: W0308 00:46:19.016961 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod703d06da_b287_4009_b69b_f24d3b583a7a.slice/crio-f0a5ec7433eee664f1883d2558ea60a31252006199f6ba10066c646e91bcd4e6 WatchSource:0}: Error finding container f0a5ec7433eee664f1883d2558ea60a31252006199f6ba10066c646e91bcd4e6: Status 404 returned error can't find the container with id f0a5ec7433eee664f1883d2558ea60a31252006199f6ba10066c646e91bcd4e6 Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.284476 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7606b7b7-2804-4c3f-b617-34e50d83c068" 
path="/var/lib/kubelet/pods/7606b7b7-2804-4c3f-b617-34e50d83c068/volumes" Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.285553 4762 generic.go:334] "Generic (PLEG): container finished" podID="9f0b97b7-6c28-4a8b-99d7-242dde839d36" containerID="4e8c57df4a57e3c3cab7d7976b325a59d275aa08309748ab23d871b41b38ede2" exitCode=0 Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.287309 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"703d06da-b287-4009-b69b-f24d3b583a7a","Type":"ContainerStarted","Data":"f0a5ec7433eee664f1883d2558ea60a31252006199f6ba10066c646e91bcd4e6"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.287344 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-576bfcb8cc-cr5zz" event={"ID":"49e2f802-447d-40ad-b7a5-b9530b0f9289","Type":"ContainerStarted","Data":"6409d7bf21e5c38163cf3bfaa8c593b5e40bd1100407751adeadd91c3d68058f"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.287362 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" event={"ID":"9f0b97b7-6c28-4a8b-99d7-242dde839d36","Type":"ContainerDied","Data":"4e8c57df4a57e3c3cab7d7976b325a59d275aa08309748ab23d871b41b38ede2"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.287376 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" event={"ID":"9f0b97b7-6c28-4a8b-99d7-242dde839d36","Type":"ContainerStarted","Data":"1b72ffc09e07f22fe0fa48cdce8fa494f89d130bd86cec83b9160981711581fc"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.306153 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-smtf6" event={"ID":"a7fac32d-8f26-459b-a67e-592f1e292d80","Type":"ContainerStarted","Data":"6a57f6a58b9b035001322a51bdc185f7977090611bec71a327e59149b9bef547"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.306193 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-db-create-smtf6" event={"ID":"a7fac32d-8f26-459b-a67e-592f1e292d80","Type":"ContainerStarted","Data":"a1e32d0e030797abb8e1b055975abe5b55c371afd8835a4da9b879a49ebb1af3"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.312737 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8f67-account-create-update-k6g6t" event={"ID":"bcd021de-3952-4553-ae54-5c244346412a","Type":"ContainerStarted","Data":"76c8a4319c56c8e9d3fd72fde6141133739516507d6c80a407b949f3d39a6d3b"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.312804 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8f67-account-create-update-k6g6t" event={"ID":"bcd021de-3952-4553-ae54-5c244346412a","Type":"ContainerStarted","Data":"d646231de5254fef53a8cbbbc6a91779af41899029b0778d607e764893cc32bb"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.328283 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-bccc47696-bz777" event={"ID":"e1f4db23-d669-492d-94a3-1d6538f754e8","Type":"ContainerStarted","Data":"62bf3208cb5bda315d5888d48322748f422c826d40b11854b829f0a14668a992"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.335049 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5d49b888c4-hqhhl" event={"ID":"d09d0f2b-e914-4662-9fce-0e0bf45ddca6","Type":"ContainerStarted","Data":"f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.335096 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5d49b888c4-hqhhl" event={"ID":"d09d0f2b-e914-4662-9fce-0e0bf45ddca6","Type":"ContainerStarted","Data":"8654a31ab49c718872bdd0a20468b42eea05aff91fa57a7ef369d4ea7066da62"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.336005 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:19 crc 
kubenswrapper[4762]: I0308 00:46:19.342862 4762 generic.go:334] "Generic (PLEG): container finished" podID="e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f" containerID="70cef9d6de50acbe20e8f771ae4b325213975dc24fb8027a301244fbf9ed2f37" exitCode=0 Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.342917 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nzmhb" event={"ID":"e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f","Type":"ContainerDied","Data":"70cef9d6de50acbe20e8f771ae4b325213975dc24fb8027a301244fbf9ed2f37"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.356825 4762 generic.go:334] "Generic (PLEG): container finished" podID="4a25763c-610d-42ef-ad0e-79c540318681" containerID="47bc565248bff0e09c955e1e7cb147379b58f8dc41cfb007b4daba23743668ba" exitCode=0 Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.356980 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-24e5-account-create-update-25drx" event={"ID":"4a25763c-610d-42ef-ad0e-79c540318681","Type":"ContainerDied","Data":"47bc565248bff0e09c955e1e7cb147379b58f8dc41cfb007b4daba23743668ba"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.368143 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-57f8d966d9-594sf" event={"ID":"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d","Type":"ContainerStarted","Data":"062d2df46c9ffd0b2cd2d8783e06f398c24846dd1b480579f9f23aa6ad8d6c14"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.397368 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-98ce-account-create-update-bz5cs" event={"ID":"b48c29ff-2a5a-4583-86a2-5550b8653bed","Type":"ContainerStarted","Data":"5ecd17bbd9698099f53ea0255abee29bd4801c10345f01325c97e7143c4c19ea"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.400239 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-98ce-account-create-update-bz5cs" 
event={"ID":"b48c29ff-2a5a-4583-86a2-5550b8653bed","Type":"ContainerStarted","Data":"cd63eed7a3db9c3a3dacc88b20ba2e35c0a1f5cdc66b287b8174ac96fe467d66"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.402302 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66558fbb95-w94pj" event={"ID":"0c28709c-824a-4d55-9741-7d37406ae689","Type":"ContainerStarted","Data":"b9f80df0b0be50276cbc0674ab7163bdb45183dac6ec11e1c6476cf5bca65253"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.435707 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-557f98fcf9-zd48x" event={"ID":"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae","Type":"ContainerStarted","Data":"e9362912d8a8f7c1a000784471b89f59f93b374c7d5d2fa58fd31941c417103c"} Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.615964 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-smtf6" podStartSLOduration=7.615945439 podStartE2EDuration="7.615945439s" podCreationTimestamp="2026-03-08 00:46:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:46:19.572091413 +0000 UTC m=+1401.046235757" watchObservedRunningTime="2026-03-08 00:46:19.615945439 +0000 UTC m=+1401.090089773" Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.625255 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-98ce-account-create-update-bz5cs" podStartSLOduration=6.625238792 podStartE2EDuration="6.625238792s" podCreationTimestamp="2026-03-08 00:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:46:19.605313865 +0000 UTC m=+1401.079458209" watchObservedRunningTime="2026-03-08 00:46:19.625238792 +0000 UTC m=+1401.099383136" Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.717512 4762 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-8f67-account-create-update-k6g6t" podStartSLOduration=6.717492944 podStartE2EDuration="6.717492944s" podCreationTimestamp="2026-03-08 00:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:46:19.675511964 +0000 UTC m=+1401.149656308" watchObservedRunningTime="2026-03-08 00:46:19.717492944 +0000 UTC m=+1401.191637288" Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.746100 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5d49b888c4-hqhhl" podStartSLOduration=12.746081844999999 podStartE2EDuration="12.746081845s" podCreationTimestamp="2026-03-08 00:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:46:19.714803082 +0000 UTC m=+1401.188947426" watchObservedRunningTime="2026-03-08 00:46:19.746081845 +0000 UTC m=+1401.220226189" Mar 08 00:46:19 crc kubenswrapper[4762]: I0308 00:46:19.888942 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-pd5j4" Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.032935 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjhcv\" (UniqueName: \"kubernetes.io/projected/a9551f6c-a71d-44b6-adb7-fe69e9c4f259-kube-api-access-pjhcv\") pod \"a9551f6c-a71d-44b6-adb7-fe69e9c4f259\" (UID: \"a9551f6c-a71d-44b6-adb7-fe69e9c4f259\") " Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.033251 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9551f6c-a71d-44b6-adb7-fe69e9c4f259-operator-scripts\") pod \"a9551f6c-a71d-44b6-adb7-fe69e9c4f259\" (UID: \"a9551f6c-a71d-44b6-adb7-fe69e9c4f259\") " Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.034806 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9551f6c-a71d-44b6-adb7-fe69e9c4f259-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9551f6c-a71d-44b6-adb7-fe69e9c4f259" (UID: "a9551f6c-a71d-44b6-adb7-fe69e9c4f259"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.043017 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9551f6c-a71d-44b6-adb7-fe69e9c4f259-kube-api-access-pjhcv" (OuterVolumeSpecName: "kube-api-access-pjhcv") pod "a9551f6c-a71d-44b6-adb7-fe69e9c4f259" (UID: "a9551f6c-a71d-44b6-adb7-fe69e9c4f259"). InnerVolumeSpecName "kube-api-access-pjhcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.135711 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjhcv\" (UniqueName: \"kubernetes.io/projected/a9551f6c-a71d-44b6-adb7-fe69e9c4f259-kube-api-access-pjhcv\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.135772 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9551f6c-a71d-44b6-adb7-fe69e9c4f259-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.448537 4762 generic.go:334] "Generic (PLEG): container finished" podID="b48c29ff-2a5a-4583-86a2-5550b8653bed" containerID="5ecd17bbd9698099f53ea0255abee29bd4801c10345f01325c97e7143c4c19ea" exitCode=0 Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.448600 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-98ce-account-create-update-bz5cs" event={"ID":"b48c29ff-2a5a-4583-86a2-5550b8653bed","Type":"ContainerDied","Data":"5ecd17bbd9698099f53ea0255abee29bd4801c10345f01325c97e7143c4c19ea"} Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.454123 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"703d06da-b287-4009-b69b-f24d3b583a7a","Type":"ContainerStarted","Data":"2575cf896a0d262bb45b1364ac918fb93a4ddbed8b61572e910b2d972310bbdf"} Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.456887 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" event={"ID":"9f0b97b7-6c28-4a8b-99d7-242dde839d36","Type":"ContainerStarted","Data":"0b8e68c838631dbf9e9e00872fdb789ac25f3d12eab8270e6360df78aba5836f"} Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.457653 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:20 
crc kubenswrapper[4762]: I0308 00:46:20.464244 4762 generic.go:334] "Generic (PLEG): container finished" podID="a7fac32d-8f26-459b-a67e-592f1e292d80" containerID="6a57f6a58b9b035001322a51bdc185f7977090611bec71a327e59149b9bef547" exitCode=0 Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.464388 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-smtf6" event={"ID":"a7fac32d-8f26-459b-a67e-592f1e292d80","Type":"ContainerDied","Data":"6a57f6a58b9b035001322a51bdc185f7977090611bec71a327e59149b9bef547"} Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.466543 4762 generic.go:334] "Generic (PLEG): container finished" podID="bcd021de-3952-4553-ae54-5c244346412a" containerID="76c8a4319c56c8e9d3fd72fde6141133739516507d6c80a407b949f3d39a6d3b" exitCode=0 Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.466669 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8f67-account-create-update-k6g6t" event={"ID":"bcd021de-3952-4553-ae54-5c244346412a","Type":"ContainerDied","Data":"76c8a4319c56c8e9d3fd72fde6141133739516507d6c80a407b949f3d39a6d3b"} Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.472706 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pd5j4" event={"ID":"a9551f6c-a71d-44b6-adb7-fe69e9c4f259","Type":"ContainerDied","Data":"663bf90ab8586ab8069779c9802314deb4e10543b912dbb563bc232405df9ddd"} Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.472965 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="663bf90ab8586ab8069779c9802314deb4e10543b912dbb563bc232405df9ddd" Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.473395 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-pd5j4" Mar 08 00:46:20 crc kubenswrapper[4762]: I0308 00:46:20.494272 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" podStartSLOduration=13.494255004 podStartE2EDuration="13.494255004s" podCreationTimestamp="2026-03-08 00:46:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:46:20.484438445 +0000 UTC m=+1401.958582789" watchObservedRunningTime="2026-03-08 00:46:20.494255004 +0000 UTC m=+1401.968399338" Mar 08 00:46:21 crc kubenswrapper[4762]: I0308 00:46:21.225705 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-24e5-account-create-update-25drx" Mar 08 00:46:21 crc kubenswrapper[4762]: I0308 00:46:21.371172 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a25763c-610d-42ef-ad0e-79c540318681-operator-scripts\") pod \"4a25763c-610d-42ef-ad0e-79c540318681\" (UID: \"4a25763c-610d-42ef-ad0e-79c540318681\") " Mar 08 00:46:21 crc kubenswrapper[4762]: I0308 00:46:21.371619 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a25763c-610d-42ef-ad0e-79c540318681-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a25763c-610d-42ef-ad0e-79c540318681" (UID: "4a25763c-610d-42ef-ad0e-79c540318681"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:46:21 crc kubenswrapper[4762]: I0308 00:46:21.371770 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsd6s\" (UniqueName: \"kubernetes.io/projected/4a25763c-610d-42ef-ad0e-79c540318681-kube-api-access-wsd6s\") pod \"4a25763c-610d-42ef-ad0e-79c540318681\" (UID: \"4a25763c-610d-42ef-ad0e-79c540318681\") " Mar 08 00:46:21 crc kubenswrapper[4762]: I0308 00:46:21.373341 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a25763c-610d-42ef-ad0e-79c540318681-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:21 crc kubenswrapper[4762]: I0308 00:46:21.391713 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a25763c-610d-42ef-ad0e-79c540318681-kube-api-access-wsd6s" (OuterVolumeSpecName: "kube-api-access-wsd6s") pod "4a25763c-610d-42ef-ad0e-79c540318681" (UID: "4a25763c-610d-42ef-ad0e-79c540318681"). InnerVolumeSpecName "kube-api-access-wsd6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:46:21 crc kubenswrapper[4762]: I0308 00:46:21.474729 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsd6s\" (UniqueName: \"kubernetes.io/projected/4a25763c-610d-42ef-ad0e-79c540318681-kube-api-access-wsd6s\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:21 crc kubenswrapper[4762]: I0308 00:46:21.485868 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-24e5-account-create-update-25drx" Mar 08 00:46:21 crc kubenswrapper[4762]: I0308 00:46:21.487259 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-24e5-account-create-update-25drx" event={"ID":"4a25763c-610d-42ef-ad0e-79c540318681","Type":"ContainerDied","Data":"d9f1400e612a9bb37b84b6f985010243bef77a12058c53618ff9adabc6290e82"} Mar 08 00:46:21 crc kubenswrapper[4762]: I0308 00:46:21.487307 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9f1400e612a9bb37b84b6f985010243bef77a12058c53618ff9adabc6290e82" Mar 08 00:46:21 crc kubenswrapper[4762]: I0308 00:46:21.929898 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-nzmhb" Mar 08 00:46:21 crc kubenswrapper[4762]: I0308 00:46:21.993391 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f-operator-scripts\") pod \"e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f\" (UID: \"e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f\") " Mar 08 00:46:21 crc kubenswrapper[4762]: I0308 00:46:21.996018 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f" (UID: "e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.081381 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-smtf6" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.116528 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njz5g\" (UniqueName: \"kubernetes.io/projected/e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f-kube-api-access-njz5g\") pod \"e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f\" (UID: \"e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f\") " Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.116770 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8q4d\" (UniqueName: \"kubernetes.io/projected/a7fac32d-8f26-459b-a67e-592f1e292d80-kube-api-access-q8q4d\") pod \"a7fac32d-8f26-459b-a67e-592f1e292d80\" (UID: \"a7fac32d-8f26-459b-a67e-592f1e292d80\") " Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.116863 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7fac32d-8f26-459b-a67e-592f1e292d80-operator-scripts\") pod \"a7fac32d-8f26-459b-a67e-592f1e292d80\" (UID: \"a7fac32d-8f26-459b-a67e-592f1e292d80\") " Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.117314 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.119358 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fac32d-8f26-459b-a67e-592f1e292d80-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a7fac32d-8f26-459b-a67e-592f1e292d80" (UID: "a7fac32d-8f26-459b-a67e-592f1e292d80"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.126943 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7fac32d-8f26-459b-a67e-592f1e292d80-kube-api-access-q8q4d" (OuterVolumeSpecName: "kube-api-access-q8q4d") pod "a7fac32d-8f26-459b-a67e-592f1e292d80" (UID: "a7fac32d-8f26-459b-a67e-592f1e292d80"). InnerVolumeSpecName "kube-api-access-q8q4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.141731 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f-kube-api-access-njz5g" (OuterVolumeSpecName: "kube-api-access-njz5g") pod "e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f" (UID: "e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f"). InnerVolumeSpecName "kube-api-access-njz5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.218938 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8q4d\" (UniqueName: \"kubernetes.io/projected/a7fac32d-8f26-459b-a67e-592f1e292d80-kube-api-access-q8q4d\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.218969 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a7fac32d-8f26-459b-a67e-592f1e292d80-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.218979 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njz5g\" (UniqueName: \"kubernetes.io/projected/e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f-kube-api-access-njz5g\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.517290 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8f67-account-create-update-k6g6t" 
event={"ID":"bcd021de-3952-4553-ae54-5c244346412a","Type":"ContainerDied","Data":"d646231de5254fef53a8cbbbc6a91779af41899029b0778d607e764893cc32bb"} Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.517325 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d646231de5254fef53a8cbbbc6a91779af41899029b0778d607e764893cc32bb" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.517954 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8f67-account-create-update-k6g6t" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.530966 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-98ce-account-create-update-bz5cs" event={"ID":"b48c29ff-2a5a-4583-86a2-5550b8653bed","Type":"ContainerDied","Data":"cd63eed7a3db9c3a3dacc88b20ba2e35c0a1f5cdc66b287b8174ac96fe467d66"} Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.531005 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd63eed7a3db9c3a3dacc88b20ba2e35c0a1f5cdc66b287b8174ac96fe467d66" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.535538 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-nzmhb" event={"ID":"e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f","Type":"ContainerDied","Data":"1dc9acc55c9a8928eb946923df70d186dd640b3b6105601cb8d31d26fb221bc3"} Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.535573 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dc9acc55c9a8928eb946923df70d186dd640b3b6105601cb8d31d26fb221bc3" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.535654 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-nzmhb" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.547204 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-smtf6" event={"ID":"a7fac32d-8f26-459b-a67e-592f1e292d80","Type":"ContainerDied","Data":"a1e32d0e030797abb8e1b055975abe5b55c371afd8835a4da9b879a49ebb1af3"} Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.547245 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1e32d0e030797abb8e1b055975abe5b55c371afd8835a4da9b879a49ebb1af3" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.547320 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-smtf6" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.627110 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd021de-3952-4553-ae54-5c244346412a-operator-scripts\") pod \"bcd021de-3952-4553-ae54-5c244346412a\" (UID: \"bcd021de-3952-4553-ae54-5c244346412a\") " Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.627160 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfr45\" (UniqueName: \"kubernetes.io/projected/bcd021de-3952-4553-ae54-5c244346412a-kube-api-access-bfr45\") pod \"bcd021de-3952-4553-ae54-5c244346412a\" (UID: \"bcd021de-3952-4553-ae54-5c244346412a\") " Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.628707 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcd021de-3952-4553-ae54-5c244346412a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bcd021de-3952-4553-ae54-5c244346412a" (UID: "bcd021de-3952-4553-ae54-5c244346412a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.635960 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcd021de-3952-4553-ae54-5c244346412a-kube-api-access-bfr45" (OuterVolumeSpecName: "kube-api-access-bfr45") pod "bcd021de-3952-4553-ae54-5c244346412a" (UID: "bcd021de-3952-4553-ae54-5c244346412a"). InnerVolumeSpecName "kube-api-access-bfr45". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.650579 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-98ce-account-create-update-bz5cs" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.729378 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bcd021de-3952-4553-ae54-5c244346412a-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.729405 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfr45\" (UniqueName: \"kubernetes.io/projected/bcd021de-3952-4553-ae54-5c244346412a-kube-api-access-bfr45\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.830197 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zj6n\" (UniqueName: \"kubernetes.io/projected/b48c29ff-2a5a-4583-86a2-5550b8653bed-kube-api-access-9zj6n\") pod \"b48c29ff-2a5a-4583-86a2-5550b8653bed\" (UID: \"b48c29ff-2a5a-4583-86a2-5550b8653bed\") " Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.830444 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b48c29ff-2a5a-4583-86a2-5550b8653bed-operator-scripts\") pod \"b48c29ff-2a5a-4583-86a2-5550b8653bed\" (UID: 
\"b48c29ff-2a5a-4583-86a2-5550b8653bed\") " Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.832202 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b48c29ff-2a5a-4583-86a2-5550b8653bed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b48c29ff-2a5a-4583-86a2-5550b8653bed" (UID: "b48c29ff-2a5a-4583-86a2-5550b8653bed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.837918 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b48c29ff-2a5a-4583-86a2-5550b8653bed-kube-api-access-9zj6n" (OuterVolumeSpecName: "kube-api-access-9zj6n") pod "b48c29ff-2a5a-4583-86a2-5550b8653bed" (UID: "b48c29ff-2a5a-4583-86a2-5550b8653bed"). InnerVolumeSpecName "kube-api-access-9zj6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.933671 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zj6n\" (UniqueName: \"kubernetes.io/projected/b48c29ff-2a5a-4583-86a2-5550b8653bed-kube-api-access-9zj6n\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:22 crc kubenswrapper[4762]: I0308 00:46:22.934093 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b48c29ff-2a5a-4583-86a2-5550b8653bed-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.593660 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"703d06da-b287-4009-b69b-f24d3b583a7a","Type":"ContainerStarted","Data":"5c9ea26e9dee95ab5e0f17fe733d2c3d9f7a0a025eaf9a7d724a22fe2c12e2e6"} Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.622379 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-576bfcb8cc-cr5zz" 
event={"ID":"49e2f802-447d-40ad-b7a5-b9530b0f9289","Type":"ContainerStarted","Data":"abc744f32d09c41f0a4ab31e6f0d84ee599b7d238b9a3ad67df75e39dfbfee5c"} Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.622513 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-576bfcb8cc-cr5zz" podUID="49e2f802-447d-40ad-b7a5-b9530b0f9289" containerName="heat-api" containerID="cri-o://abc744f32d09c41f0a4ab31e6f0d84ee599b7d238b9a3ad67df75e39dfbfee5c" gracePeriod=60 Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.622811 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.634938 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66558fbb95-w94pj" event={"ID":"0c28709c-824a-4d55-9741-7d37406ae689","Type":"ContainerStarted","Data":"8def5b71e03a34a0f8e1f0cc35f9f5fa3a17bfc26bffde27d3fa5c9f1cabcce7"} Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.635785 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.643919 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-557f98fcf9-zd48x" event={"ID":"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae","Type":"ContainerStarted","Data":"618f3d2692343fc3d57031da6624abc6e396b4c62df06f4a04080be79617bd32"} Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.644776 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.674004 4762 generic.go:334] "Generic (PLEG): container finished" podID="9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d" containerID="55c338495586ff77aeef4136e691b486713efdc83c5950bded175cbd4e69148e" exitCode=1 Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.674087 4762 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-api-57f8d966d9-594sf" event={"ID":"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d","Type":"ContainerDied","Data":"55c338495586ff77aeef4136e691b486713efdc83c5950bded175cbd4e69148e"} Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.674698 4762 scope.go:117] "RemoveContainer" containerID="55c338495586ff77aeef4136e691b486713efdc83c5950bded175cbd4e69148e" Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.677263 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-576bfcb8cc-cr5zz" podStartSLOduration=13.203846274 podStartE2EDuration="16.677242253s" podCreationTimestamp="2026-03-08 00:46:07 +0000 UTC" firstStartedPulling="2026-03-08 00:46:18.467081818 +0000 UTC m=+1399.941226162" lastFinishedPulling="2026-03-08 00:46:21.940477807 +0000 UTC m=+1403.414622141" observedRunningTime="2026-03-08 00:46:23.661213404 +0000 UTC m=+1405.135357748" watchObservedRunningTime="2026-03-08 00:46:23.677242253 +0000 UTC m=+1405.151386597" Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.689542 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-bccc47696-bz777" event={"ID":"e1f4db23-d669-492d-94a3-1d6538f754e8","Type":"ContainerStarted","Data":"df0be4ce24b8003828d55afd4ffe1dfa0d7187536648576d787b25bf496927b4"} Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.689675 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-bccc47696-bz777" podUID="e1f4db23-d669-492d-94a3-1d6538f754e8" containerName="heat-cfnapi" containerID="cri-o://df0be4ce24b8003828d55afd4ffe1dfa0d7187536648576d787b25bf496927b4" gracePeriod=60 Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.689749 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.698258 4762 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/heat-cfnapi-66558fbb95-w94pj" podStartSLOduration=3.498966399 podStartE2EDuration="6.698240763s" podCreationTimestamp="2026-03-08 00:46:17 +0000 UTC" firstStartedPulling="2026-03-08 00:46:18.726021429 +0000 UTC m=+1400.200165773" lastFinishedPulling="2026-03-08 00:46:21.925295793 +0000 UTC m=+1403.399440137" observedRunningTime="2026-03-08 00:46:23.691251589 +0000 UTC m=+1405.165395933" watchObservedRunningTime="2026-03-08 00:46:23.698240763 +0000 UTC m=+1405.172385107" Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.699940 4762 generic.go:334] "Generic (PLEG): container finished" podID="45e01f25-822e-454b-a7c8-e43ffd1feb56" containerID="739c559efe8d0faeb43f6609e62fe01fec0739fcb8b827d1d59f0c39b38ef504" exitCode=1 Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.700032 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-98ce-account-create-update-bz5cs" Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.701083 4762 scope.go:117] "RemoveContainer" containerID="739c559efe8d0faeb43f6609e62fe01fec0739fcb8b827d1d59f0c39b38ef504" Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.701381 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-68c5fb77db-mz745" event={"ID":"45e01f25-822e-454b-a7c8-e43ffd1feb56","Type":"ContainerDied","Data":"739c559efe8d0faeb43f6609e62fe01fec0739fcb8b827d1d59f0c39b38ef504"} Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.701426 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8f67-account-create-update-k6g6t" Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.748558 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-557f98fcf9-zd48x" podStartSLOduration=3.331409082 podStartE2EDuration="6.748539776s" podCreationTimestamp="2026-03-08 00:46:17 +0000 UTC" firstStartedPulling="2026-03-08 00:46:18.511806011 +0000 UTC m=+1399.985950355" lastFinishedPulling="2026-03-08 00:46:21.928936705 +0000 UTC m=+1403.403081049" observedRunningTime="2026-03-08 00:46:23.726606308 +0000 UTC m=+1405.200750652" watchObservedRunningTime="2026-03-08 00:46:23.748539776 +0000 UTC m=+1405.222684120" Mar 08 00:46:23 crc kubenswrapper[4762]: I0308 00:46:23.848376 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-bccc47696-bz777" podStartSLOduration=13.290574277 podStartE2EDuration="16.848361497s" podCreationTimestamp="2026-03-08 00:46:07 +0000 UTC" firstStartedPulling="2026-03-08 00:46:18.368215695 +0000 UTC m=+1399.842360039" lastFinishedPulling="2026-03-08 00:46:21.926002915 +0000 UTC m=+1403.400147259" observedRunningTime="2026-03-08 00:46:23.784923325 +0000 UTC m=+1405.259067669" watchObservedRunningTime="2026-03-08 00:46:23.848361497 +0000 UTC m=+1405.322505841" Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.247278 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gktkm" Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.247629 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gktkm" Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.709415 4762 generic.go:334] "Generic (PLEG): container finished" podID="45e01f25-822e-454b-a7c8-e43ffd1feb56" containerID="0d05e5e31286984017d8bc0d9aab4776a20bd7df358efb1107f01241ef61d309" exitCode=1 Mar 08 00:46:24 crc 
kubenswrapper[4762]: I0308 00:46:24.709739 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-68c5fb77db-mz745" event={"ID":"45e01f25-822e-454b-a7c8-e43ffd1feb56","Type":"ContainerDied","Data":"0d05e5e31286984017d8bc0d9aab4776a20bd7df358efb1107f01241ef61d309"} Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.709820 4762 scope.go:117] "RemoveContainer" containerID="739c559efe8d0faeb43f6609e62fe01fec0739fcb8b827d1d59f0c39b38ef504" Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.710559 4762 scope.go:117] "RemoveContainer" containerID="0d05e5e31286984017d8bc0d9aab4776a20bd7df358efb1107f01241ef61d309" Mar 08 00:46:24 crc kubenswrapper[4762]: E0308 00:46:24.711001 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-68c5fb77db-mz745_openstack(45e01f25-822e-454b-a7c8-e43ffd1feb56)\"" pod="openstack/heat-cfnapi-68c5fb77db-mz745" podUID="45e01f25-822e-454b-a7c8-e43ffd1feb56" Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.713497 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"703d06da-b287-4009-b69b-f24d3b583a7a","Type":"ContainerStarted","Data":"9926687cc35dd80b83f75e011a13842809e1c25b788322765b10ff3364de82a5"} Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.715194 4762 generic.go:334] "Generic (PLEG): container finished" podID="49e2f802-447d-40ad-b7a5-b9530b0f9289" containerID="abc744f32d09c41f0a4ab31e6f0d84ee599b7d238b9a3ad67df75e39dfbfee5c" exitCode=0 Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.715261 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-576bfcb8cc-cr5zz" event={"ID":"49e2f802-447d-40ad-b7a5-b9530b0f9289","Type":"ContainerDied","Data":"abc744f32d09c41f0a4ab31e6f0d84ee599b7d238b9a3ad67df75e39dfbfee5c"} Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 
00:46:24.715278 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-576bfcb8cc-cr5zz" event={"ID":"49e2f802-447d-40ad-b7a5-b9530b0f9289","Type":"ContainerDied","Data":"6409d7bf21e5c38163cf3bfaa8c593b5e40bd1100407751adeadd91c3d68058f"} Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.715288 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6409d7bf21e5c38163cf3bfaa8c593b5e40bd1100407751adeadd91c3d68058f" Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.719655 4762 generic.go:334] "Generic (PLEG): container finished" podID="9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d" containerID="b3803cb0ceb3610982d0e848ab8a47ea62cb852a62c5805a7b1bc8402a703052" exitCode=1 Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.719772 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-57f8d966d9-594sf" event={"ID":"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d","Type":"ContainerDied","Data":"b3803cb0ceb3610982d0e848ab8a47ea62cb852a62c5805a7b1bc8402a703052"} Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.720411 4762 scope.go:117] "RemoveContainer" containerID="b3803cb0ceb3610982d0e848ab8a47ea62cb852a62c5805a7b1bc8402a703052" Mar 08 00:46:24 crc kubenswrapper[4762]: E0308 00:46:24.720661 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-57f8d966d9-594sf_openstack(9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d)\"" pod="openstack/heat-api-57f8d966d9-594sf" podUID="9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d" Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.808012 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.818366 4762 scope.go:117] "RemoveContainer" containerID="55c338495586ff77aeef4136e691b486713efdc83c5950bded175cbd4e69148e" Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.895820 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-combined-ca-bundle\") pod \"49e2f802-447d-40ad-b7a5-b9530b0f9289\" (UID: \"49e2f802-447d-40ad-b7a5-b9530b0f9289\") " Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.895999 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjww9\" (UniqueName: \"kubernetes.io/projected/49e2f802-447d-40ad-b7a5-b9530b0f9289-kube-api-access-rjww9\") pod \"49e2f802-447d-40ad-b7a5-b9530b0f9289\" (UID: \"49e2f802-447d-40ad-b7a5-b9530b0f9289\") " Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.896200 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-config-data-custom\") pod \"49e2f802-447d-40ad-b7a5-b9530b0f9289\" (UID: \"49e2f802-447d-40ad-b7a5-b9530b0f9289\") " Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.896239 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-config-data\") pod \"49e2f802-447d-40ad-b7a5-b9530b0f9289\" (UID: \"49e2f802-447d-40ad-b7a5-b9530b0f9289\") " Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.902449 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49e2f802-447d-40ad-b7a5-b9530b0f9289-kube-api-access-rjww9" (OuterVolumeSpecName: "kube-api-access-rjww9") pod "49e2f802-447d-40ad-b7a5-b9530b0f9289" (UID: 
"49e2f802-447d-40ad-b7a5-b9530b0f9289"). InnerVolumeSpecName "kube-api-access-rjww9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.902954 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "49e2f802-447d-40ad-b7a5-b9530b0f9289" (UID: "49e2f802-447d-40ad-b7a5-b9530b0f9289"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.934022 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49e2f802-447d-40ad-b7a5-b9530b0f9289" (UID: "49e2f802-447d-40ad-b7a5-b9530b0f9289"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:24 crc kubenswrapper[4762]: I0308 00:46:24.961033 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-config-data" (OuterVolumeSpecName: "config-data") pod "49e2f802-447d-40ad-b7a5-b9530b0f9289" (UID: "49e2f802-447d-40ad-b7a5-b9530b0f9289"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.000358 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.000392 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.000403 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e2f802-447d-40ad-b7a5-b9530b0f9289-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.000415 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjww9\" (UniqueName: \"kubernetes.io/projected/49e2f802-447d-40ad-b7a5-b9530b0f9289-kube-api-access-rjww9\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.200829 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-68c5fb77db-mz745" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.200899 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-68c5fb77db-mz745" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.211854 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-57f8d966d9-594sf" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.212045 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-57f8d966d9-594sf" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.325443 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.329927 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gktkm" podUID="cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" containerName="registry-server" probeResult="failure" output=< Mar 08 00:46:25 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 00:46:25 crc kubenswrapper[4762]: > Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.350362 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5b78746fdd-smtch" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.423486 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6d6c959c44-lwsnn"] Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.423813 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6d6c959c44-lwsnn" podUID="aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" containerName="placement-log" containerID="cri-o://d0a8be3c5e6f6ce1dbca93979bad329fc90663c6ef835c655c3bed3d3e5fdd66" gracePeriod=30 Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.424248 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6d6c959c44-lwsnn" podUID="aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" containerName="placement-api" containerID="cri-o://994022efbd7c851568489198582fe3d0ec406eb6bf4eb6f44ab48bae905fbe9e" gracePeriod=30 Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.742949 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.770192 4762 generic.go:334] "Generic (PLEG): container finished" podID="e1f4db23-d669-492d-94a3-1d6538f754e8" containerID="df0be4ce24b8003828d55afd4ffe1dfa0d7187536648576d787b25bf496927b4" exitCode=0 Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.770272 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-bccc47696-bz777" event={"ID":"e1f4db23-d669-492d-94a3-1d6538f754e8","Type":"ContainerDied","Data":"df0be4ce24b8003828d55afd4ffe1dfa0d7187536648576d787b25bf496927b4"} Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.770299 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-bccc47696-bz777" event={"ID":"e1f4db23-d669-492d-94a3-1d6538f754e8","Type":"ContainerDied","Data":"62bf3208cb5bda315d5888d48322748f422c826d40b11854b829f0a14668a992"} Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.770316 4762 scope.go:117] "RemoveContainer" containerID="df0be4ce24b8003828d55afd4ffe1dfa0d7187536648576d787b25bf496927b4" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.770410 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-bccc47696-bz777" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.782970 4762 scope.go:117] "RemoveContainer" containerID="0d05e5e31286984017d8bc0d9aab4776a20bd7df358efb1107f01241ef61d309" Mar 08 00:46:25 crc kubenswrapper[4762]: E0308 00:46:25.783180 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-68c5fb77db-mz745_openstack(45e01f25-822e-454b-a7c8-e43ffd1feb56)\"" pod="openstack/heat-cfnapi-68c5fb77db-mz745" podUID="45e01f25-822e-454b-a7c8-e43ffd1feb56" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.795850 4762 generic.go:334] "Generic (PLEG): container finished" podID="aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" containerID="d0a8be3c5e6f6ce1dbca93979bad329fc90663c6ef835c655c3bed3d3e5fdd66" exitCode=143 Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.795901 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d6c959c44-lwsnn" event={"ID":"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6","Type":"ContainerDied","Data":"d0a8be3c5e6f6ce1dbca93979bad329fc90663c6ef835c655c3bed3d3e5fdd66"} Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.800969 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-576bfcb8cc-cr5zz" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.802239 4762 scope.go:117] "RemoveContainer" containerID="b3803cb0ceb3610982d0e848ab8a47ea62cb852a62c5805a7b1bc8402a703052" Mar 08 00:46:25 crc kubenswrapper[4762]: E0308 00:46:25.807242 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-57f8d966d9-594sf_openstack(9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d)\"" pod="openstack/heat-api-57f8d966d9-594sf" podUID="9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.814436 4762 scope.go:117] "RemoveContainer" containerID="df0be4ce24b8003828d55afd4ffe1dfa0d7187536648576d787b25bf496927b4" Mar 08 00:46:25 crc kubenswrapper[4762]: E0308 00:46:25.819161 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df0be4ce24b8003828d55afd4ffe1dfa0d7187536648576d787b25bf496927b4\": container with ID starting with df0be4ce24b8003828d55afd4ffe1dfa0d7187536648576d787b25bf496927b4 not found: ID does not exist" containerID="df0be4ce24b8003828d55afd4ffe1dfa0d7187536648576d787b25bf496927b4" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.819206 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df0be4ce24b8003828d55afd4ffe1dfa0d7187536648576d787b25bf496927b4"} err="failed to get container status \"df0be4ce24b8003828d55afd4ffe1dfa0d7187536648576d787b25bf496927b4\": rpc error: code = NotFound desc = could not find container \"df0be4ce24b8003828d55afd4ffe1dfa0d7187536648576d787b25bf496927b4\": container with ID starting with df0be4ce24b8003828d55afd4ffe1dfa0d7187536648576d787b25bf496927b4 not found: ID does not exist" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.851778 4762 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-config-data-custom\") pod \"e1f4db23-d669-492d-94a3-1d6538f754e8\" (UID: \"e1f4db23-d669-492d-94a3-1d6538f754e8\") " Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.851972 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-combined-ca-bundle\") pod \"e1f4db23-d669-492d-94a3-1d6538f754e8\" (UID: \"e1f4db23-d669-492d-94a3-1d6538f754e8\") " Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.852156 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-config-data\") pod \"e1f4db23-d669-492d-94a3-1d6538f754e8\" (UID: \"e1f4db23-d669-492d-94a3-1d6538f754e8\") " Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.852290 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llm44\" (UniqueName: \"kubernetes.io/projected/e1f4db23-d669-492d-94a3-1d6538f754e8-kube-api-access-llm44\") pod \"e1f4db23-d669-492d-94a3-1d6538f754e8\" (UID: \"e1f4db23-d669-492d-94a3-1d6538f754e8\") " Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.872568 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f4db23-d669-492d-94a3-1d6538f754e8-kube-api-access-llm44" (OuterVolumeSpecName: "kube-api-access-llm44") pod "e1f4db23-d669-492d-94a3-1d6538f754e8" (UID: "e1f4db23-d669-492d-94a3-1d6538f754e8"). InnerVolumeSpecName "kube-api-access-llm44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.883932 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-576bfcb8cc-cr5zz"] Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.895612 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e1f4db23-d669-492d-94a3-1d6538f754e8" (UID: "e1f4db23-d669-492d-94a3-1d6538f754e8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.926348 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-576bfcb8cc-cr5zz"] Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.944391 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-config-data" (OuterVolumeSpecName: "config-data") pod "e1f4db23-d669-492d-94a3-1d6538f754e8" (UID: "e1f4db23-d669-492d-94a3-1d6538f754e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.945008 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1f4db23-d669-492d-94a3-1d6538f754e8" (UID: "e1f4db23-d669-492d-94a3-1d6538f754e8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.964238 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.964274 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.964287 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llm44\" (UniqueName: \"kubernetes.io/projected/e1f4db23-d669-492d-94a3-1d6538f754e8-kube-api-access-llm44\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:25 crc kubenswrapper[4762]: I0308 00:46:25.964300 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e1f4db23-d669-492d-94a3-1d6538f754e8-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:26 crc kubenswrapper[4762]: I0308 00:46:26.108715 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-bccc47696-bz777"] Mar 08 00:46:26 crc kubenswrapper[4762]: I0308 00:46:26.118934 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-bccc47696-bz777"] Mar 08 00:46:26 crc kubenswrapper[4762]: I0308 00:46:26.814056 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"703d06da-b287-4009-b69b-f24d3b583a7a","Type":"ContainerStarted","Data":"184eb6a2f9156881aa03d4d567d660a0dea62e95430af65bc5706ed21ef66662"} Mar 08 00:46:26 crc kubenswrapper[4762]: I0308 00:46:26.814828 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 08 00:46:26 crc kubenswrapper[4762]: I0308 00:46:26.817191 4762 
scope.go:117] "RemoveContainer" containerID="b3803cb0ceb3610982d0e848ab8a47ea62cb852a62c5805a7b1bc8402a703052" Mar 08 00:46:26 crc kubenswrapper[4762]: E0308 00:46:26.817689 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-57f8d966d9-594sf_openstack(9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d)\"" pod="openstack/heat-api-57f8d966d9-594sf" podUID="9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d" Mar 08 00:46:26 crc kubenswrapper[4762]: I0308 00:46:26.818307 4762 scope.go:117] "RemoveContainer" containerID="0d05e5e31286984017d8bc0d9aab4776a20bd7df358efb1107f01241ef61d309" Mar 08 00:46:26 crc kubenswrapper[4762]: E0308 00:46:26.818484 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-68c5fb77db-mz745_openstack(45e01f25-822e-454b-a7c8-e43ffd1feb56)\"" pod="openstack/heat-cfnapi-68c5fb77db-mz745" podUID="45e01f25-822e-454b-a7c8-e43ffd1feb56" Mar 08 00:46:26 crc kubenswrapper[4762]: I0308 00:46:26.839305 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.374464863 podStartE2EDuration="9.839286214s" podCreationTimestamp="2026-03-08 00:46:17 +0000 UTC" firstStartedPulling="2026-03-08 00:46:19.021931096 +0000 UTC m=+1400.496075440" lastFinishedPulling="2026-03-08 00:46:25.486752447 +0000 UTC m=+1406.960896791" observedRunningTime="2026-03-08 00:46:26.832723914 +0000 UTC m=+1408.306868258" watchObservedRunningTime="2026-03-08 00:46:26.839286214 +0000 UTC m=+1408.313430558" Mar 08 00:46:27 crc kubenswrapper[4762]: I0308 00:46:27.291163 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49e2f802-447d-40ad-b7a5-b9530b0f9289" path="/var/lib/kubelet/pods/49e2f802-447d-40ad-b7a5-b9530b0f9289/volumes" Mar 08 
00:46:27 crc kubenswrapper[4762]: I0308 00:46:27.291681 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f4db23-d669-492d-94a3-1d6538f754e8" path="/var/lib/kubelet/pods/e1f4db23-d669-492d-94a3-1d6538f754e8/volumes" Mar 08 00:46:27 crc kubenswrapper[4762]: I0308 00:46:27.571912 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:46:27 crc kubenswrapper[4762]: I0308 00:46:27.636176 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-fvkkr"] Mar 08 00:46:27 crc kubenswrapper[4762]: I0308 00:46:27.636392 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" podUID="d6a1f41f-1dbf-4da0-b725-eb520868478f" containerName="dnsmasq-dns" containerID="cri-o://047c9cf2a60c85dda0798e4e1f1327bfb336f2ad1c6778e723eba8a620eaafc8" gracePeriod=10 Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.127501 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zcqpb"] Mar 08 00:46:28 crc kubenswrapper[4762]: E0308 00:46:28.128128 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f4db23-d669-492d-94a3-1d6538f754e8" containerName="heat-cfnapi" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.128141 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f4db23-d669-492d-94a3-1d6538f754e8" containerName="heat-cfnapi" Mar 08 00:46:28 crc kubenswrapper[4762]: E0308 00:46:28.128149 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a25763c-610d-42ef-ad0e-79c540318681" containerName="mariadb-account-create-update" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.128155 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a25763c-610d-42ef-ad0e-79c540318681" containerName="mariadb-account-create-update" Mar 08 00:46:28 crc kubenswrapper[4762]: E0308 00:46:28.128168 4762 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7fac32d-8f26-459b-a67e-592f1e292d80" containerName="mariadb-database-create" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.128174 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7fac32d-8f26-459b-a67e-592f1e292d80" containerName="mariadb-database-create" Mar 08 00:46:28 crc kubenswrapper[4762]: E0308 00:46:28.128195 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9551f6c-a71d-44b6-adb7-fe69e9c4f259" containerName="mariadb-database-create" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.128200 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9551f6c-a71d-44b6-adb7-fe69e9c4f259" containerName="mariadb-database-create" Mar 08 00:46:28 crc kubenswrapper[4762]: E0308 00:46:28.128212 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e2f802-447d-40ad-b7a5-b9530b0f9289" containerName="heat-api" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.128217 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e2f802-447d-40ad-b7a5-b9530b0f9289" containerName="heat-api" Mar 08 00:46:28 crc kubenswrapper[4762]: E0308 00:46:28.128233 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcd021de-3952-4553-ae54-5c244346412a" containerName="mariadb-account-create-update" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.128239 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcd021de-3952-4553-ae54-5c244346412a" containerName="mariadb-account-create-update" Mar 08 00:46:28 crc kubenswrapper[4762]: E0308 00:46:28.128258 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48c29ff-2a5a-4583-86a2-5550b8653bed" containerName="mariadb-account-create-update" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.128265 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48c29ff-2a5a-4583-86a2-5550b8653bed" containerName="mariadb-account-create-update" Mar 08 
00:46:28 crc kubenswrapper[4762]: E0308 00:46:28.128276 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f" containerName="mariadb-database-create" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.128283 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f" containerName="mariadb-database-create" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.128475 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="49e2f802-447d-40ad-b7a5-b9530b0f9289" containerName="heat-api" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.128490 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcd021de-3952-4553-ae54-5c244346412a" containerName="mariadb-account-create-update" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.128506 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a25763c-610d-42ef-ad0e-79c540318681" containerName="mariadb-account-create-update" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.128516 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f4db23-d669-492d-94a3-1d6538f754e8" containerName="heat-cfnapi" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.128528 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7fac32d-8f26-459b-a67e-592f1e292d80" containerName="mariadb-database-create" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.128539 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f" containerName="mariadb-database-create" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.128547 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="b48c29ff-2a5a-4583-86a2-5550b8653bed" containerName="mariadb-account-create-update" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.128557 4762 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a9551f6c-a71d-44b6-adb7-fe69e9c4f259" containerName="mariadb-database-create" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.129283 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zcqpb" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.131697 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-p872h" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.131731 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.135335 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.146569 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zcqpb"] Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.273183 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-scripts\") pod \"nova-cell0-conductor-db-sync-zcqpb\" (UID: \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\") " pod="openstack/nova-cell0-conductor-db-sync-zcqpb" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.273351 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qzn7\" (UniqueName: \"kubernetes.io/projected/0922e07c-7b7b-4e78-98f0-19238b92ef5c-kube-api-access-9qzn7\") pod \"nova-cell0-conductor-db-sync-zcqpb\" (UID: \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\") " pod="openstack/nova-cell0-conductor-db-sync-zcqpb" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.273443 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zcqpb\" (UID: \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\") " pod="openstack/nova-cell0-conductor-db-sync-zcqpb" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.273547 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-config-data\") pod \"nova-cell0-conductor-db-sync-zcqpb\" (UID: \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\") " pod="openstack/nova-cell0-conductor-db-sync-zcqpb" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.374720 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zcqpb\" (UID: \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\") " pod="openstack/nova-cell0-conductor-db-sync-zcqpb" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.374869 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-config-data\") pod \"nova-cell0-conductor-db-sync-zcqpb\" (UID: \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\") " pod="openstack/nova-cell0-conductor-db-sync-zcqpb" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.374901 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-scripts\") pod \"nova-cell0-conductor-db-sync-zcqpb\" (UID: \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\") " pod="openstack/nova-cell0-conductor-db-sync-zcqpb" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.374930 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9qzn7\" (UniqueName: \"kubernetes.io/projected/0922e07c-7b7b-4e78-98f0-19238b92ef5c-kube-api-access-9qzn7\") pod \"nova-cell0-conductor-db-sync-zcqpb\" (UID: \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\") " pod="openstack/nova-cell0-conductor-db-sync-zcqpb" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.381784 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zcqpb\" (UID: \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\") " pod="openstack/nova-cell0-conductor-db-sync-zcqpb" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.384415 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-config-data\") pod \"nova-cell0-conductor-db-sync-zcqpb\" (UID: \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\") " pod="openstack/nova-cell0-conductor-db-sync-zcqpb" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.388069 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-scripts\") pod \"nova-cell0-conductor-db-sync-zcqpb\" (UID: \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\") " pod="openstack/nova-cell0-conductor-db-sync-zcqpb" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.394276 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qzn7\" (UniqueName: \"kubernetes.io/projected/0922e07c-7b7b-4e78-98f0-19238b92ef5c-kube-api-access-9qzn7\") pod \"nova-cell0-conductor-db-sync-zcqpb\" (UID: \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\") " pod="openstack/nova-cell0-conductor-db-sync-zcqpb" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.446340 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zcqpb" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.546432 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.679529 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v96br\" (UniqueName: \"kubernetes.io/projected/d6a1f41f-1dbf-4da0-b725-eb520868478f-kube-api-access-v96br\") pod \"d6a1f41f-1dbf-4da0-b725-eb520868478f\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.679590 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-dns-svc\") pod \"d6a1f41f-1dbf-4da0-b725-eb520868478f\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.679658 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-dns-swift-storage-0\") pod \"d6a1f41f-1dbf-4da0-b725-eb520868478f\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.679734 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-ovsdbserver-sb\") pod \"d6a1f41f-1dbf-4da0-b725-eb520868478f\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.679783 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-config\") pod \"d6a1f41f-1dbf-4da0-b725-eb520868478f\" (UID: 
\"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.679858 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-ovsdbserver-nb\") pod \"d6a1f41f-1dbf-4da0-b725-eb520868478f\" (UID: \"d6a1f41f-1dbf-4da0-b725-eb520868478f\") " Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.698650 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6a1f41f-1dbf-4da0-b725-eb520868478f-kube-api-access-v96br" (OuterVolumeSpecName: "kube-api-access-v96br") pod "d6a1f41f-1dbf-4da0-b725-eb520868478f" (UID: "d6a1f41f-1dbf-4da0-b725-eb520868478f"). InnerVolumeSpecName "kube-api-access-v96br". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.756123 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d6a1f41f-1dbf-4da0-b725-eb520868478f" (UID: "d6a1f41f-1dbf-4da0-b725-eb520868478f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.782972 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v96br\" (UniqueName: \"kubernetes.io/projected/d6a1f41f-1dbf-4da0-b725-eb520868478f-kube-api-access-v96br\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.783002 4762 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.793423 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d6a1f41f-1dbf-4da0-b725-eb520868478f" (UID: "d6a1f41f-1dbf-4da0-b725-eb520868478f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.814217 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d6a1f41f-1dbf-4da0-b725-eb520868478f" (UID: "d6a1f41f-1dbf-4da0-b725-eb520868478f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.821713 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-config" (OuterVolumeSpecName: "config") pod "d6a1f41f-1dbf-4da0-b725-eb520868478f" (UID: "d6a1f41f-1dbf-4da0-b725-eb520868478f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.825272 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d6a1f41f-1dbf-4da0-b725-eb520868478f" (UID: "d6a1f41f-1dbf-4da0-b725-eb520868478f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.886523 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.886560 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.886574 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.886585 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6a1f41f-1dbf-4da0-b725-eb520868478f-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.906064 4762 generic.go:334] "Generic (PLEG): container finished" podID="aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" containerID="994022efbd7c851568489198582fe3d0ec406eb6bf4eb6f44ab48bae905fbe9e" exitCode=0 Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.906294 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d6c959c44-lwsnn" 
event={"ID":"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6","Type":"ContainerDied","Data":"994022efbd7c851568489198582fe3d0ec406eb6bf4eb6f44ab48bae905fbe9e"}
Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.915372 4762 generic.go:334] "Generic (PLEG): container finished" podID="d6a1f41f-1dbf-4da0-b725-eb520868478f" containerID="047c9cf2a60c85dda0798e4e1f1327bfb336f2ad1c6778e723eba8a620eaafc8" exitCode=0
Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.915602 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-fvkkr"
Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.915795 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" event={"ID":"d6a1f41f-1dbf-4da0-b725-eb520868478f","Type":"ContainerDied","Data":"047c9cf2a60c85dda0798e4e1f1327bfb336f2ad1c6778e723eba8a620eaafc8"}
Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.915838 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-fvkkr" event={"ID":"d6a1f41f-1dbf-4da0-b725-eb520868478f","Type":"ContainerDied","Data":"1987212fbd008032d7592f08c3f8c83c2dcf3d0b4d081eadf479a94215ea2bb7"}
Mar 08 00:46:28 crc kubenswrapper[4762]: I0308 00:46:28.915878 4762 scope.go:117] "RemoveContainer" containerID="047c9cf2a60c85dda0798e4e1f1327bfb336f2ad1c6778e723eba8a620eaafc8"
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.043208 4762 scope.go:117] "RemoveContainer" containerID="2aea23080173209c1153febd2875583c8c1e75c511abcae18047f7c7426dcf4e"
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.112930 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-fvkkr"]
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.211838 4762 scope.go:117] "RemoveContainer" containerID="047c9cf2a60c85dda0798e4e1f1327bfb336f2ad1c6778e723eba8a620eaafc8"
Mar 08 00:46:29 crc kubenswrapper[4762]: E0308 00:46:29.217916 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"047c9cf2a60c85dda0798e4e1f1327bfb336f2ad1c6778e723eba8a620eaafc8\": container with ID starting with 047c9cf2a60c85dda0798e4e1f1327bfb336f2ad1c6778e723eba8a620eaafc8 not found: ID does not exist" containerID="047c9cf2a60c85dda0798e4e1f1327bfb336f2ad1c6778e723eba8a620eaafc8"
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.217961 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"047c9cf2a60c85dda0798e4e1f1327bfb336f2ad1c6778e723eba8a620eaafc8"} err="failed to get container status \"047c9cf2a60c85dda0798e4e1f1327bfb336f2ad1c6778e723eba8a620eaafc8\": rpc error: code = NotFound desc = could not find container \"047c9cf2a60c85dda0798e4e1f1327bfb336f2ad1c6778e723eba8a620eaafc8\": container with ID starting with 047c9cf2a60c85dda0798e4e1f1327bfb336f2ad1c6778e723eba8a620eaafc8 not found: ID does not exist"
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.218009 4762 scope.go:117] "RemoveContainer" containerID="2aea23080173209c1153febd2875583c8c1e75c511abcae18047f7c7426dcf4e"
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.218125 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-fvkkr"]
Mar 08 00:46:29 crc kubenswrapper[4762]: E0308 00:46:29.221573 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aea23080173209c1153febd2875583c8c1e75c511abcae18047f7c7426dcf4e\": container with ID starting with 2aea23080173209c1153febd2875583c8c1e75c511abcae18047f7c7426dcf4e not found: ID does not exist" containerID="2aea23080173209c1153febd2875583c8c1e75c511abcae18047f7c7426dcf4e"
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.221625 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aea23080173209c1153febd2875583c8c1e75c511abcae18047f7c7426dcf4e"} err="failed to get container status \"2aea23080173209c1153febd2875583c8c1e75c511abcae18047f7c7426dcf4e\": rpc error: code = NotFound desc = could not find container \"2aea23080173209c1153febd2875583c8c1e75c511abcae18047f7c7426dcf4e\": container with ID starting with 2aea23080173209c1153febd2875583c8c1e75c511abcae18047f7c7426dcf4e not found: ID does not exist"
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.244706 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zcqpb"]
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.282283 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6a1f41f-1dbf-4da0-b725-eb520868478f" path="/var/lib/kubelet/pods/d6a1f41f-1dbf-4da0-b725-eb520868478f/volumes"
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.450396 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6d6c959c44-lwsnn"
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.525089 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs6t5\" (UniqueName: \"kubernetes.io/projected/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-kube-api-access-bs6t5\") pod \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") "
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.525183 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-logs\") pod \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") "
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.525259 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-scripts\") pod \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") "
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.525343 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-public-tls-certs\") pod \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") "
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.525394 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-internal-tls-certs\") pod \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") "
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.525412 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-combined-ca-bundle\") pod \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") "
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.525450 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-config-data\") pod \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\" (UID: \"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6\") "
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.526120 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-logs" (OuterVolumeSpecName: "logs") pod "aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" (UID: "aefc1b1f-8723-437e-94a6-bc60ccc2f6b6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.531580 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-scripts" (OuterVolumeSpecName: "scripts") pod "aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" (UID: "aefc1b1f-8723-437e-94a6-bc60ccc2f6b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.533275 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-kube-api-access-bs6t5" (OuterVolumeSpecName: "kube-api-access-bs6t5") pod "aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" (UID: "aefc1b1f-8723-437e-94a6-bc60ccc2f6b6"). InnerVolumeSpecName "kube-api-access-bs6t5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.587004 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-config-data" (OuterVolumeSpecName: "config-data") pod "aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" (UID: "aefc1b1f-8723-437e-94a6-bc60ccc2f6b6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.613920 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" (UID: "aefc1b1f-8723-437e-94a6-bc60ccc2f6b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.628065 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs6t5\" (UniqueName: \"kubernetes.io/projected/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-kube-api-access-bs6t5\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.628151 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-logs\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.628162 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-scripts\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.628173 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.628181 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-config-data\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.642508 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" (UID: "aefc1b1f-8723-437e-94a6-bc60ccc2f6b6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.664145 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" (UID: "aefc1b1f-8723-437e-94a6-bc60ccc2f6b6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.729907 4762 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.729936 4762 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.930890 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6d6c959c44-lwsnn"
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.930956 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d6c959c44-lwsnn" event={"ID":"aefc1b1f-8723-437e-94a6-bc60ccc2f6b6","Type":"ContainerDied","Data":"d768636b3493fec6c7a2554942e51026a41ebd2792bf60df48962385cf616071"}
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.930999 4762 scope.go:117] "RemoveContainer" containerID="994022efbd7c851568489198582fe3d0ec406eb6bf4eb6f44ab48bae905fbe9e"
Mar 08 00:46:29 crc kubenswrapper[4762]: I0308 00:46:29.966874 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zcqpb" event={"ID":"0922e07c-7b7b-4e78-98f0-19238b92ef5c","Type":"ContainerStarted","Data":"825fa56bda363050bb41966e3ad8ac5048a33e25f0fb57cd3583c4b174d541e0"}
Mar 08 00:46:30 crc kubenswrapper[4762]: I0308 00:46:30.055241 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6d6c959c44-lwsnn"]
Mar 08 00:46:30 crc kubenswrapper[4762]: I0308 00:46:30.068872 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6d6c959c44-lwsnn"]
Mar 08 00:46:30 crc kubenswrapper[4762]: I0308 00:46:30.076839 4762 scope.go:117] "RemoveContainer" containerID="d0a8be3c5e6f6ce1dbca93979bad329fc90663c6ef835c655c3bed3d3e5fdd66"
Mar 08 00:46:30 crc kubenswrapper[4762]: I0308 00:46:30.518207 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-66558fbb95-w94pj"
Mar 08 00:46:30 crc kubenswrapper[4762]: I0308 00:46:30.572871 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-68c5fb77db-mz745"]
Mar 08 00:46:30 crc kubenswrapper[4762]: I0308 00:46:30.980831 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-68c5fb77db-mz745" event={"ID":"45e01f25-822e-454b-a7c8-e43ffd1feb56","Type":"ContainerDied","Data":"9394060e21f35ecf800ac5c7003ac9ef1cbb29fee8daf0b8e40dc75be32537cf"}
Mar 08 00:46:30 crc kubenswrapper[4762]: I0308 00:46:30.981100 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9394060e21f35ecf800ac5c7003ac9ef1cbb29fee8daf0b8e40dc75be32537cf"
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.026379 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-68c5fb77db-mz745"
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.060620 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-combined-ca-bundle\") pod \"45e01f25-822e-454b-a7c8-e43ffd1feb56\" (UID: \"45e01f25-822e-454b-a7c8-e43ffd1feb56\") "
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.060727 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjlmr\" (UniqueName: \"kubernetes.io/projected/45e01f25-822e-454b-a7c8-e43ffd1feb56-kube-api-access-bjlmr\") pod \"45e01f25-822e-454b-a7c8-e43ffd1feb56\" (UID: \"45e01f25-822e-454b-a7c8-e43ffd1feb56\") "
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.060845 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-config-data-custom\") pod \"45e01f25-822e-454b-a7c8-e43ffd1feb56\" (UID: \"45e01f25-822e-454b-a7c8-e43ffd1feb56\") "
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.060959 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-config-data\") pod \"45e01f25-822e-454b-a7c8-e43ffd1feb56\" (UID: \"45e01f25-822e-454b-a7c8-e43ffd1feb56\") "
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.067388 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45e01f25-822e-454b-a7c8-e43ffd1feb56-kube-api-access-bjlmr" (OuterVolumeSpecName: "kube-api-access-bjlmr") pod "45e01f25-822e-454b-a7c8-e43ffd1feb56" (UID: "45e01f25-822e-454b-a7c8-e43ffd1feb56"). InnerVolumeSpecName "kube-api-access-bjlmr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.067865 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "45e01f25-822e-454b-a7c8-e43ffd1feb56" (UID: "45e01f25-822e-454b-a7c8-e43ffd1feb56"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.100750 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45e01f25-822e-454b-a7c8-e43ffd1feb56" (UID: "45e01f25-822e-454b-a7c8-e43ffd1feb56"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.140814 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-config-data" (OuterVolumeSpecName: "config-data") pod "45e01f25-822e-454b-a7c8-e43ffd1feb56" (UID: "45e01f25-822e-454b-a7c8-e43ffd1feb56"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.163485 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.163521 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjlmr\" (UniqueName: \"kubernetes.io/projected/45e01f25-822e-454b-a7c8-e43ffd1feb56-kube-api-access-bjlmr\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.163533 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.163546 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45e01f25-822e-454b-a7c8-e43ffd1feb56-config-data\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.238867 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.239179 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" containerName="ceilometer-central-agent" containerID="cri-o://2575cf896a0d262bb45b1364ac918fb93a4ddbed8b61572e910b2d972310bbdf" gracePeriod=30
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.239326 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" containerName="proxy-httpd" containerID="cri-o://184eb6a2f9156881aa03d4d567d660a0dea62e95430af65bc5706ed21ef66662" gracePeriod=30
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.239395 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" containerName="ceilometer-notification-agent" containerID="cri-o://5c9ea26e9dee95ab5e0f17fe733d2c3d9f7a0a025eaf9a7d724a22fe2c12e2e6" gracePeriod=30
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.239538 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" containerName="sg-core" containerID="cri-o://9926687cc35dd80b83f75e011a13842809e1c25b788322765b10ff3364de82a5" gracePeriod=30
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.274504 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" path="/var/lib/kubelet/pods/aefc1b1f-8723-437e-94a6-bc60ccc2f6b6/volumes"
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.997417 4762 generic.go:334] "Generic (PLEG): container finished" podID="703d06da-b287-4009-b69b-f24d3b583a7a" containerID="184eb6a2f9156881aa03d4d567d660a0dea62e95430af65bc5706ed21ef66662" exitCode=0
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.997464 4762 generic.go:334] "Generic (PLEG): container finished" podID="703d06da-b287-4009-b69b-f24d3b583a7a" containerID="9926687cc35dd80b83f75e011a13842809e1c25b788322765b10ff3364de82a5" exitCode=2
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.997475 4762 generic.go:334] "Generic (PLEG): container finished" podID="703d06da-b287-4009-b69b-f24d3b583a7a" containerID="5c9ea26e9dee95ab5e0f17fe733d2c3d9f7a0a025eaf9a7d724a22fe2c12e2e6" exitCode=0
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.997486 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"703d06da-b287-4009-b69b-f24d3b583a7a","Type":"ContainerDied","Data":"184eb6a2f9156881aa03d4d567d660a0dea62e95430af65bc5706ed21ef66662"}
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.997532 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"703d06da-b287-4009-b69b-f24d3b583a7a","Type":"ContainerDied","Data":"9926687cc35dd80b83f75e011a13842809e1c25b788322765b10ff3364de82a5"}
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.997543 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"703d06da-b287-4009-b69b-f24d3b583a7a","Type":"ContainerDied","Data":"5c9ea26e9dee95ab5e0f17fe733d2c3d9f7a0a025eaf9a7d724a22fe2c12e2e6"}
Mar 08 00:46:31 crc kubenswrapper[4762]: I0308 00:46:31.997534 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-68c5fb77db-mz745"
Mar 08 00:46:32 crc kubenswrapper[4762]: I0308 00:46:32.049033 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-68c5fb77db-mz745"]
Mar 08 00:46:32 crc kubenswrapper[4762]: I0308 00:46:32.085731 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-68c5fb77db-mz745"]
Mar 08 00:46:33 crc kubenswrapper[4762]: I0308 00:46:33.282245 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45e01f25-822e-454b-a7c8-e43ffd1feb56" path="/var/lib/kubelet/pods/45e01f25-822e-454b-a7c8-e43ffd1feb56/volumes"
Mar 08 00:46:34 crc kubenswrapper[4762]: I0308 00:46:34.032369 4762 generic.go:334] "Generic (PLEG): container finished" podID="703d06da-b287-4009-b69b-f24d3b583a7a" containerID="2575cf896a0d262bb45b1364ac918fb93a4ddbed8b61572e910b2d972310bbdf" exitCode=0
Mar 08 00:46:34 crc kubenswrapper[4762]: I0308 00:46:34.032469 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"703d06da-b287-4009-b69b-f24d3b583a7a","Type":"ContainerDied","Data":"2575cf896a0d262bb45b1364ac918fb93a4ddbed8b61572e910b2d972310bbdf"}
Mar 08 00:46:34 crc kubenswrapper[4762]: I0308 00:46:34.197829 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-557f98fcf9-zd48x"
Mar 08 00:46:34 crc kubenswrapper[4762]: I0308 00:46:34.262865 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-57f8d966d9-594sf"]
Mar 08 00:46:35 crc kubenswrapper[4762]: I0308 00:46:35.179386 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-68f674dbc4-pt9pq"
Mar 08 00:46:35 crc kubenswrapper[4762]: I0308 00:46:35.285649 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5d49b888c4-hqhhl"]
Mar 08 00:46:35 crc kubenswrapper[4762]: I0308 00:46:35.285879 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5d49b888c4-hqhhl" podUID="d09d0f2b-e914-4662-9fce-0e0bf45ddca6" containerName="heat-engine" containerID="cri-o://f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9" gracePeriod=60
Mar 08 00:46:35 crc kubenswrapper[4762]: E0308 00:46:35.289249 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 08 00:46:35 crc kubenswrapper[4762]: E0308 00:46:35.291417 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 08 00:46:35 crc kubenswrapper[4762]: E0308 00:46:35.292814 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 08 00:46:35 crc kubenswrapper[4762]: E0308 00:46:35.292852 4762 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5d49b888c4-hqhhl" podUID="d09d0f2b-e914-4662-9fce-0e0bf45ddca6" containerName="heat-engine"
Mar 08 00:46:35 crc kubenswrapper[4762]: I0308 00:46:35.316000 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gktkm" podUID="cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" containerName="registry-server" probeResult="failure" output=<
Mar 08 00:46:35 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s
Mar 08 00:46:35 crc kubenswrapper[4762]: >
Mar 08 00:46:37 crc kubenswrapper[4762]: E0308 00:46:37.422447 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 08 00:46:37 crc kubenswrapper[4762]: E0308 00:46:37.424902 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 08 00:46:37 crc kubenswrapper[4762]: E0308 00:46:37.425818 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"]
Mar 08 00:46:37 crc kubenswrapper[4762]: E0308 00:46:37.425846 4762 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5d49b888c4-hqhhl" podUID="d09d0f2b-e914-4662-9fce-0e0bf45ddca6" containerName="heat-engine"
Mar 08 00:46:38 crc kubenswrapper[4762]: I0308 00:46:38.999219 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-57f8d966d9-594sf"
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.120081 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-57f8d966d9-594sf" event={"ID":"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d","Type":"ContainerDied","Data":"062d2df46c9ffd0b2cd2d8783e06f398c24846dd1b480579f9f23aa6ad8d6c14"}
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.120145 4762 scope.go:117] "RemoveContainer" containerID="b3803cb0ceb3610982d0e848ab8a47ea62cb852a62c5805a7b1bc8402a703052"
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.120308 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-57f8d966d9-594sf"
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.139958 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-combined-ca-bundle\") pod \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\" (UID: \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\") "
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.140050 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p242k\" (UniqueName: \"kubernetes.io/projected/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-kube-api-access-p242k\") pod \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\" (UID: \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\") "
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.140092 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-config-data-custom\") pod \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\" (UID: \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\") "
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.140132 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-config-data\") pod \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\" (UID: \"9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d\") "
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.165988 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d" (UID: "9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.168300 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-kube-api-access-p242k" (OuterVolumeSpecName: "kube-api-access-p242k") pod "9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d" (UID: "9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d"). InnerVolumeSpecName "kube-api-access-p242k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.192942 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d" (UID: "9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.243188 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-config-data" (OuterVolumeSpecName: "config-data") pod "9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d" (UID: "9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.251685 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.251725 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p242k\" (UniqueName: \"kubernetes.io/projected/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-kube-api-access-p242k\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.251739 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.251753 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d-config-data\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.475288 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-57f8d966d9-594sf"]
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.489517 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-57f8d966d9-594sf"]
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.644686 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.764210 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/703d06da-b287-4009-b69b-f24d3b583a7a-run-httpd\") pod \"703d06da-b287-4009-b69b-f24d3b583a7a\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") "
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.764273 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/703d06da-b287-4009-b69b-f24d3b583a7a-log-httpd\") pod \"703d06da-b287-4009-b69b-f24d3b583a7a\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") "
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.764358 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq7kl\" (UniqueName: \"kubernetes.io/projected/703d06da-b287-4009-b69b-f24d3b583a7a-kube-api-access-pq7kl\") pod \"703d06da-b287-4009-b69b-f24d3b583a7a\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") "
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.764388 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-scripts\") pod \"703d06da-b287-4009-b69b-f24d3b583a7a\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") "
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.764543 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-config-data\") pod \"703d06da-b287-4009-b69b-f24d3b583a7a\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") "
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.764567 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-sg-core-conf-yaml\") pod \"703d06da-b287-4009-b69b-f24d3b583a7a\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") "
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.764626 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-combined-ca-bundle\") pod \"703d06da-b287-4009-b69b-f24d3b583a7a\" (UID: \"703d06da-b287-4009-b69b-f24d3b583a7a\") "
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.765217 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/703d06da-b287-4009-b69b-f24d3b583a7a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "703d06da-b287-4009-b69b-f24d3b583a7a" (UID: "703d06da-b287-4009-b69b-f24d3b583a7a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.765429 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/703d06da-b287-4009-b69b-f24d3b583a7a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "703d06da-b287-4009-b69b-f24d3b583a7a" (UID: "703d06da-b287-4009-b69b-f24d3b583a7a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.767819 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-scripts" (OuterVolumeSpecName: "scripts") pod "703d06da-b287-4009-b69b-f24d3b583a7a" (UID: "703d06da-b287-4009-b69b-f24d3b583a7a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.767925 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/703d06da-b287-4009-b69b-f24d3b583a7a-kube-api-access-pq7kl" (OuterVolumeSpecName: "kube-api-access-pq7kl") pod "703d06da-b287-4009-b69b-f24d3b583a7a" (UID: "703d06da-b287-4009-b69b-f24d3b583a7a"). InnerVolumeSpecName "kube-api-access-pq7kl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.793989 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "703d06da-b287-4009-b69b-f24d3b583a7a" (UID: "703d06da-b287-4009-b69b-f24d3b583a7a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.845949 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "703d06da-b287-4009-b69b-f24d3b583a7a" (UID: "703d06da-b287-4009-b69b-f24d3b583a7a"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.867662 4762 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.867696 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.867713 4762 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/703d06da-b287-4009-b69b-f24d3b583a7a-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.867724 4762 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/703d06da-b287-4009-b69b-f24d3b583a7a-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.867736 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq7kl\" (UniqueName: \"kubernetes.io/projected/703d06da-b287-4009-b69b-f24d3b583a7a-kube-api-access-pq7kl\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.867751 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.913099 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-config-data" (OuterVolumeSpecName: "config-data") pod "703d06da-b287-4009-b69b-f24d3b583a7a" (UID: "703d06da-b287-4009-b69b-f24d3b583a7a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:39 crc kubenswrapper[4762]: I0308 00:46:39.970246 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/703d06da-b287-4009-b69b-f24d3b583a7a-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.142270 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"703d06da-b287-4009-b69b-f24d3b583a7a","Type":"ContainerDied","Data":"f0a5ec7433eee664f1883d2558ea60a31252006199f6ba10066c646e91bcd4e6"} Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.142328 4762 scope.go:117] "RemoveContainer" containerID="184eb6a2f9156881aa03d4d567d660a0dea62e95430af65bc5706ed21ef66662" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.142443 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.150265 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zcqpb" event={"ID":"0922e07c-7b7b-4e78-98f0-19238b92ef5c","Type":"ContainerStarted","Data":"5958aab0df1a8a102d6300d1bded727c3f31794d16f1f8e15e787aa7ef539b60"} Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.171844 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zcqpb" podStartSLOduration=1.534715967 podStartE2EDuration="12.171828737s" podCreationTimestamp="2026-03-08 00:46:28 +0000 UTC" firstStartedPulling="2026-03-08 00:46:29.073294525 +0000 UTC m=+1410.547438869" lastFinishedPulling="2026-03-08 00:46:39.710407305 +0000 UTC m=+1421.184551639" observedRunningTime="2026-03-08 00:46:40.168280599 +0000 UTC m=+1421.642424943" watchObservedRunningTime="2026-03-08 00:46:40.171828737 +0000 UTC m=+1421.645973081" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 
00:46:40.176658 4762 scope.go:117] "RemoveContainer" containerID="9926687cc35dd80b83f75e011a13842809e1c25b788322765b10ff3364de82a5" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.195781 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.211962 4762 scope.go:117] "RemoveContainer" containerID="5c9ea26e9dee95ab5e0f17fe733d2c3d9f7a0a025eaf9a7d724a22fe2c12e2e6" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.215128 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.249945 4762 scope.go:117] "RemoveContainer" containerID="2575cf896a0d262bb45b1364ac918fb93a4ddbed8b61572e910b2d972310bbdf" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.252327 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:46:40 crc kubenswrapper[4762]: E0308 00:46:40.252945 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" containerName="sg-core" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.252963 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" containerName="sg-core" Mar 08 00:46:40 crc kubenswrapper[4762]: E0308 00:46:40.252975 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" containerName="placement-api" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.252983 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" containerName="placement-api" Mar 08 00:46:40 crc kubenswrapper[4762]: E0308 00:46:40.253002 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" containerName="ceilometer-central-agent" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253010 4762 
state_mem.go:107] "Deleted CPUSet assignment" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" containerName="ceilometer-central-agent" Mar 08 00:46:40 crc kubenswrapper[4762]: E0308 00:46:40.253022 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" containerName="placement-log" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253031 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" containerName="placement-log" Mar 08 00:46:40 crc kubenswrapper[4762]: E0308 00:46:40.253046 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a1f41f-1dbf-4da0-b725-eb520868478f" containerName="init" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253053 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a1f41f-1dbf-4da0-b725-eb520868478f" containerName="init" Mar 08 00:46:40 crc kubenswrapper[4762]: E0308 00:46:40.253074 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d" containerName="heat-api" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253081 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d" containerName="heat-api" Mar 08 00:46:40 crc kubenswrapper[4762]: E0308 00:46:40.253097 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d" containerName="heat-api" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253108 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d" containerName="heat-api" Mar 08 00:46:40 crc kubenswrapper[4762]: E0308 00:46:40.253121 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e01f25-822e-454b-a7c8-e43ffd1feb56" containerName="heat-cfnapi" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253128 4762 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="45e01f25-822e-454b-a7c8-e43ffd1feb56" containerName="heat-cfnapi" Mar 08 00:46:40 crc kubenswrapper[4762]: E0308 00:46:40.253140 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" containerName="ceilometer-notification-agent" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253147 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" containerName="ceilometer-notification-agent" Mar 08 00:46:40 crc kubenswrapper[4762]: E0308 00:46:40.253158 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a1f41f-1dbf-4da0-b725-eb520868478f" containerName="dnsmasq-dns" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253167 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a1f41f-1dbf-4da0-b725-eb520868478f" containerName="dnsmasq-dns" Mar 08 00:46:40 crc kubenswrapper[4762]: E0308 00:46:40.253183 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" containerName="proxy-httpd" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253190 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" containerName="proxy-httpd" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253408 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" containerName="ceilometer-central-agent" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253425 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" containerName="proxy-httpd" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253439 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d" containerName="heat-api" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253449 4762 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" containerName="ceilometer-notification-agent" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253457 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" containerName="placement-log" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253466 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e01f25-822e-454b-a7c8-e43ffd1feb56" containerName="heat-cfnapi" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253477 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="aefc1b1f-8723-437e-94a6-bc60ccc2f6b6" containerName="placement-api" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253487 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="45e01f25-822e-454b-a7c8-e43ffd1feb56" containerName="heat-cfnapi" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253505 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a1f41f-1dbf-4da0-b725-eb520868478f" containerName="dnsmasq-dns" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253518 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" containerName="sg-core" Mar 08 00:46:40 crc kubenswrapper[4762]: E0308 00:46:40.253792 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45e01f25-822e-454b-a7c8-e43ffd1feb56" containerName="heat-cfnapi" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.253803 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="45e01f25-822e-454b-a7c8-e43ffd1feb56" containerName="heat-cfnapi" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.254020 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d" containerName="heat-api" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.256009 4762 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.258687 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.259077 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.263809 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.379019 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.379107 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaa813a9-4ef0-4357-9a97-92abee0907f7-run-httpd\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.379149 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kdph\" (UniqueName: \"kubernetes.io/projected/aaa813a9-4ef0-4357-9a97-92abee0907f7-kube-api-access-4kdph\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.379187 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-scripts\") pod \"ceilometer-0\" (UID: 
\"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.379201 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaa813a9-4ef0-4357-9a97-92abee0907f7-log-httpd\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.379244 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.379287 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-config-data\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.481122 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-scripts\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.481164 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaa813a9-4ef0-4357-9a97-92abee0907f7-log-httpd\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.481212 4762 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.481260 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-config-data\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.481410 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.481625 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaa813a9-4ef0-4357-9a97-92abee0907f7-run-httpd\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.481749 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaa813a9-4ef0-4357-9a97-92abee0907f7-log-httpd\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.481977 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaa813a9-4ef0-4357-9a97-92abee0907f7-run-httpd\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc 
kubenswrapper[4762]: I0308 00:46:40.482108 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kdph\" (UniqueName: \"kubernetes.io/projected/aaa813a9-4ef0-4357-9a97-92abee0907f7-kube-api-access-4kdph\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.487046 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.487883 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-scripts\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.493872 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.498278 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-config-data\") pod \"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.506520 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kdph\" (UniqueName: \"kubernetes.io/projected/aaa813a9-4ef0-4357-9a97-92abee0907f7-kube-api-access-4kdph\") pod 
\"ceilometer-0\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " pod="openstack/ceilometer-0" Mar 08 00:46:40 crc kubenswrapper[4762]: I0308 00:46:40.572500 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:46:41 crc kubenswrapper[4762]: I0308 00:46:41.124213 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:46:41 crc kubenswrapper[4762]: I0308 00:46:41.160495 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaa813a9-4ef0-4357-9a97-92abee0907f7","Type":"ContainerStarted","Data":"a1d9f9a79d7835dae1be6c0c52503ec73847ee142d8f118734dcbae1590eccaa"} Mar 08 00:46:41 crc kubenswrapper[4762]: I0308 00:46:41.276432 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="703d06da-b287-4009-b69b-f24d3b583a7a" path="/var/lib/kubelet/pods/703d06da-b287-4009-b69b-f24d3b583a7a/volumes" Mar 08 00:46:41 crc kubenswrapper[4762]: I0308 00:46:41.277475 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d" path="/var/lib/kubelet/pods/9b9c6edb-de6a-451b-9e78-8ba0fbba5c7d/volumes" Mar 08 00:46:42 crc kubenswrapper[4762]: I0308 00:46:42.173851 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaa813a9-4ef0-4357-9a97-92abee0907f7","Type":"ContainerStarted","Data":"6016de5ac9c21884db3c67e7407d6789db516a4e12160a983203b1899838a1c9"} Mar 08 00:46:42 crc kubenswrapper[4762]: I0308 00:46:42.864775 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.039831 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-config-data-custom\") pod \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\" (UID: \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\") " Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.039914 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h928l\" (UniqueName: \"kubernetes.io/projected/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-kube-api-access-h928l\") pod \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\" (UID: \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\") " Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.040021 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-config-data\") pod \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\" (UID: \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\") " Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.040092 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-combined-ca-bundle\") pod \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\" (UID: \"d09d0f2b-e914-4662-9fce-0e0bf45ddca6\") " Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.058084 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d09d0f2b-e914-4662-9fce-0e0bf45ddca6" (UID: "d09d0f2b-e914-4662-9fce-0e0bf45ddca6"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.058300 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-kube-api-access-h928l" (OuterVolumeSpecName: "kube-api-access-h928l") pod "d09d0f2b-e914-4662-9fce-0e0bf45ddca6" (UID: "d09d0f2b-e914-4662-9fce-0e0bf45ddca6"). InnerVolumeSpecName "kube-api-access-h928l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.096101 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d09d0f2b-e914-4662-9fce-0e0bf45ddca6" (UID: "d09d0f2b-e914-4662-9fce-0e0bf45ddca6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.140311 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-config-data" (OuterVolumeSpecName: "config-data") pod "d09d0f2b-e914-4662-9fce-0e0bf45ddca6" (UID: "d09d0f2b-e914-4662-9fce-0e0bf45ddca6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.142040 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.142081 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.142094 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.142104 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h928l\" (UniqueName: \"kubernetes.io/projected/d09d0f2b-e914-4662-9fce-0e0bf45ddca6-kube-api-access-h928l\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.201612 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaa813a9-4ef0-4357-9a97-92abee0907f7","Type":"ContainerStarted","Data":"39649f17b90f3d75cb4b74a37708d8148f4cdc33a6e7f15f031d66522d86bd82"} Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.203279 4762 generic.go:334] "Generic (PLEG): container finished" podID="d09d0f2b-e914-4662-9fce-0e0bf45ddca6" containerID="f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9" exitCode=0 Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.203310 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5d49b888c4-hqhhl" 
event={"ID":"d09d0f2b-e914-4662-9fce-0e0bf45ddca6","Type":"ContainerDied","Data":"f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9"} Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.203326 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5d49b888c4-hqhhl" event={"ID":"d09d0f2b-e914-4662-9fce-0e0bf45ddca6","Type":"ContainerDied","Data":"8654a31ab49c718872bdd0a20468b42eea05aff91fa57a7ef369d4ea7066da62"} Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.203341 4762 scope.go:117] "RemoveContainer" containerID="f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9" Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.203461 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5d49b888c4-hqhhl" Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.241899 4762 scope.go:117] "RemoveContainer" containerID="f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9" Mar 08 00:46:43 crc kubenswrapper[4762]: E0308 00:46:43.242377 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9\": container with ID starting with f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9 not found: ID does not exist" containerID="f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9" Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.242421 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9"} err="failed to get container status \"f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9\": rpc error: code = NotFound desc = could not find container \"f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9\": container with ID starting with 
f275a72ca94b88b62543f2bd89d18cb02620a6ccf6f221b71240afe9c5c036c9 not found: ID does not exist" Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.252819 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5d49b888c4-hqhhl"] Mar 08 00:46:43 crc kubenswrapper[4762]: I0308 00:46:43.279112 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5d49b888c4-hqhhl"] Mar 08 00:46:44 crc kubenswrapper[4762]: I0308 00:46:44.216728 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaa813a9-4ef0-4357-9a97-92abee0907f7","Type":"ContainerStarted","Data":"c91a604ac6931bfb815f6b3761a15b85b53f0d9c4e8d00da10809a7fb7f2776d"} Mar 08 00:46:45 crc kubenswrapper[4762]: I0308 00:46:45.230186 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaa813a9-4ef0-4357-9a97-92abee0907f7","Type":"ContainerStarted","Data":"118fdefc657755f927ca7b819e5ad342a9af82da76c5021058fd4de4c85f38bd"} Mar 08 00:46:45 crc kubenswrapper[4762]: I0308 00:46:45.231668 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 08 00:46:45 crc kubenswrapper[4762]: I0308 00:46:45.267607 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.5585219750000001 podStartE2EDuration="5.267587987s" podCreationTimestamp="2026-03-08 00:46:40 +0000 UTC" firstStartedPulling="2026-03-08 00:46:41.119800625 +0000 UTC m=+1422.593944969" lastFinishedPulling="2026-03-08 00:46:44.828866647 +0000 UTC m=+1426.303010981" observedRunningTime="2026-03-08 00:46:45.255823809 +0000 UTC m=+1426.729968163" watchObservedRunningTime="2026-03-08 00:46:45.267587987 +0000 UTC m=+1426.741732331" Mar 08 00:46:45 crc kubenswrapper[4762]: I0308 00:46:45.290005 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d09d0f2b-e914-4662-9fce-0e0bf45ddca6" 
path="/var/lib/kubelet/pods/d09d0f2b-e914-4662-9fce-0e0bf45ddca6/volumes" Mar 08 00:46:45 crc kubenswrapper[4762]: I0308 00:46:45.295244 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gktkm" podUID="cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" containerName="registry-server" probeResult="failure" output=< Mar 08 00:46:45 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 00:46:45 crc kubenswrapper[4762]: > Mar 08 00:46:50 crc kubenswrapper[4762]: I0308 00:46:50.000679 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:46:50 crc kubenswrapper[4762]: I0308 00:46:50.001359 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerName="ceilometer-central-agent" containerID="cri-o://6016de5ac9c21884db3c67e7407d6789db516a4e12160a983203b1899838a1c9" gracePeriod=30 Mar 08 00:46:50 crc kubenswrapper[4762]: I0308 00:46:50.001424 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerName="sg-core" containerID="cri-o://c91a604ac6931bfb815f6b3761a15b85b53f0d9c4e8d00da10809a7fb7f2776d" gracePeriod=30 Mar 08 00:46:50 crc kubenswrapper[4762]: I0308 00:46:50.001448 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerName="proxy-httpd" containerID="cri-o://118fdefc657755f927ca7b819e5ad342a9af82da76c5021058fd4de4c85f38bd" gracePeriod=30 Mar 08 00:46:50 crc kubenswrapper[4762]: I0308 00:46:50.001483 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerName="ceilometer-notification-agent" 
containerID="cri-o://39649f17b90f3d75cb4b74a37708d8148f4cdc33a6e7f15f031d66522d86bd82" gracePeriod=30 Mar 08 00:46:50 crc kubenswrapper[4762]: I0308 00:46:50.275576 4762 generic.go:334] "Generic (PLEG): container finished" podID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerID="118fdefc657755f927ca7b819e5ad342a9af82da76c5021058fd4de4c85f38bd" exitCode=0 Mar 08 00:46:50 crc kubenswrapper[4762]: I0308 00:46:50.275857 4762 generic.go:334] "Generic (PLEG): container finished" podID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerID="c91a604ac6931bfb815f6b3761a15b85b53f0d9c4e8d00da10809a7fb7f2776d" exitCode=2 Mar 08 00:46:50 crc kubenswrapper[4762]: I0308 00:46:50.275641 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaa813a9-4ef0-4357-9a97-92abee0907f7","Type":"ContainerDied","Data":"118fdefc657755f927ca7b819e5ad342a9af82da76c5021058fd4de4c85f38bd"} Mar 08 00:46:50 crc kubenswrapper[4762]: I0308 00:46:50.276484 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaa813a9-4ef0-4357-9a97-92abee0907f7","Type":"ContainerDied","Data":"c91a604ac6931bfb815f6b3761a15b85b53f0d9c4e8d00da10809a7fb7f2776d"} Mar 08 00:46:51 crc kubenswrapper[4762]: I0308 00:46:51.288937 4762 generic.go:334] "Generic (PLEG): container finished" podID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerID="39649f17b90f3d75cb4b74a37708d8148f4cdc33a6e7f15f031d66522d86bd82" exitCode=0 Mar 08 00:46:51 crc kubenswrapper[4762]: I0308 00:46:51.289003 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaa813a9-4ef0-4357-9a97-92abee0907f7","Type":"ContainerDied","Data":"39649f17b90f3d75cb4b74a37708d8148f4cdc33a6e7f15f031d66522d86bd82"} Mar 08 00:46:52 crc kubenswrapper[4762]: I0308 00:46:52.299630 4762 generic.go:334] "Generic (PLEG): container finished" podID="0922e07c-7b7b-4e78-98f0-19238b92ef5c" 
containerID="5958aab0df1a8a102d6300d1bded727c3f31794d16f1f8e15e787aa7ef539b60" exitCode=0 Mar 08 00:46:52 crc kubenswrapper[4762]: I0308 00:46:52.299829 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zcqpb" event={"ID":"0922e07c-7b7b-4e78-98f0-19238b92ef5c","Type":"ContainerDied","Data":"5958aab0df1a8a102d6300d1bded727c3f31794d16f1f8e15e787aa7ef539b60"} Mar 08 00:46:52 crc kubenswrapper[4762]: I0308 00:46:52.999444 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.141618 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-config-data\") pod \"aaa813a9-4ef0-4357-9a97-92abee0907f7\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.142074 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kdph\" (UniqueName: \"kubernetes.io/projected/aaa813a9-4ef0-4357-9a97-92abee0907f7-kube-api-access-4kdph\") pod \"aaa813a9-4ef0-4357-9a97-92abee0907f7\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.142112 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-scripts\") pod \"aaa813a9-4ef0-4357-9a97-92abee0907f7\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.142473 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaa813a9-4ef0-4357-9a97-92abee0907f7-run-httpd\") pod \"aaa813a9-4ef0-4357-9a97-92abee0907f7\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " Mar 08 00:46:53 crc 
kubenswrapper[4762]: I0308 00:46:53.142582 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaa813a9-4ef0-4357-9a97-92abee0907f7-log-httpd\") pod \"aaa813a9-4ef0-4357-9a97-92abee0907f7\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.142643 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-sg-core-conf-yaml\") pod \"aaa813a9-4ef0-4357-9a97-92abee0907f7\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.142678 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-combined-ca-bundle\") pod \"aaa813a9-4ef0-4357-9a97-92abee0907f7\" (UID: \"aaa813a9-4ef0-4357-9a97-92abee0907f7\") " Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.142832 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaa813a9-4ef0-4357-9a97-92abee0907f7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aaa813a9-4ef0-4357-9a97-92abee0907f7" (UID: "aaa813a9-4ef0-4357-9a97-92abee0907f7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.143070 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaa813a9-4ef0-4357-9a97-92abee0907f7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aaa813a9-4ef0-4357-9a97-92abee0907f7" (UID: "aaa813a9-4ef0-4357-9a97-92abee0907f7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.143398 4762 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaa813a9-4ef0-4357-9a97-92abee0907f7-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.143413 4762 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aaa813a9-4ef0-4357-9a97-92abee0907f7-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.148545 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-scripts" (OuterVolumeSpecName: "scripts") pod "aaa813a9-4ef0-4357-9a97-92abee0907f7" (UID: "aaa813a9-4ef0-4357-9a97-92abee0907f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.151883 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaa813a9-4ef0-4357-9a97-92abee0907f7-kube-api-access-4kdph" (OuterVolumeSpecName: "kube-api-access-4kdph") pod "aaa813a9-4ef0-4357-9a97-92abee0907f7" (UID: "aaa813a9-4ef0-4357-9a97-92abee0907f7"). InnerVolumeSpecName "kube-api-access-4kdph". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.189617 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aaa813a9-4ef0-4357-9a97-92abee0907f7" (UID: "aaa813a9-4ef0-4357-9a97-92abee0907f7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.232805 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aaa813a9-4ef0-4357-9a97-92abee0907f7" (UID: "aaa813a9-4ef0-4357-9a97-92abee0907f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.245728 4762 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.245792 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.245806 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kdph\" (UniqueName: \"kubernetes.io/projected/aaa813a9-4ef0-4357-9a97-92abee0907f7-kube-api-access-4kdph\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.245821 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.289796 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-config-data" (OuterVolumeSpecName: "config-data") pod "aaa813a9-4ef0-4357-9a97-92abee0907f7" (UID: "aaa813a9-4ef0-4357-9a97-92abee0907f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.314437 4762 generic.go:334] "Generic (PLEG): container finished" podID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerID="6016de5ac9c21884db3c67e7407d6789db516a4e12160a983203b1899838a1c9" exitCode=0 Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.314504 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.314538 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaa813a9-4ef0-4357-9a97-92abee0907f7","Type":"ContainerDied","Data":"6016de5ac9c21884db3c67e7407d6789db516a4e12160a983203b1899838a1c9"} Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.314592 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aaa813a9-4ef0-4357-9a97-92abee0907f7","Type":"ContainerDied","Data":"a1d9f9a79d7835dae1be6c0c52503ec73847ee142d8f118734dcbae1590eccaa"} Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.314610 4762 scope.go:117] "RemoveContainer" containerID="118fdefc657755f927ca7b819e5ad342a9af82da76c5021058fd4de4c85f38bd" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.337394 4762 scope.go:117] "RemoveContainer" containerID="c91a604ac6931bfb815f6b3761a15b85b53f0d9c4e8d00da10809a7fb7f2776d" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.353811 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aaa813a9-4ef0-4357-9a97-92abee0907f7-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.363707 4762 scope.go:117] "RemoveContainer" containerID="39649f17b90f3d75cb4b74a37708d8148f4cdc33a6e7f15f031d66522d86bd82" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.363891 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.375822 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.388376 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:46:53 crc kubenswrapper[4762]: E0308 00:46:53.388832 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerName="sg-core" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.388848 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerName="sg-core" Mar 08 00:46:53 crc kubenswrapper[4762]: E0308 00:46:53.388863 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerName="ceilometer-notification-agent" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.388870 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerName="ceilometer-notification-agent" Mar 08 00:46:53 crc kubenswrapper[4762]: E0308 00:46:53.388888 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerName="proxy-httpd" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.388894 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerName="proxy-httpd" Mar 08 00:46:53 crc kubenswrapper[4762]: E0308 00:46:53.388901 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerName="ceilometer-central-agent" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.388908 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerName="ceilometer-central-agent" Mar 08 00:46:53 crc kubenswrapper[4762]: E0308 00:46:53.388924 
4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d09d0f2b-e914-4662-9fce-0e0bf45ddca6" containerName="heat-engine" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.388930 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d09d0f2b-e914-4662-9fce-0e0bf45ddca6" containerName="heat-engine" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.389134 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerName="proxy-httpd" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.389148 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerName="ceilometer-central-agent" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.389155 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerName="sg-core" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.389171 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="d09d0f2b-e914-4662-9fce-0e0bf45ddca6" containerName="heat-engine" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.389176 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" containerName="ceilometer-notification-agent" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.390860 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.397399 4762 scope.go:117] "RemoveContainer" containerID="6016de5ac9c21884db3c67e7407d6789db516a4e12160a983203b1899838a1c9" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.399822 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.400480 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.401270 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.462264 4762 scope.go:117] "RemoveContainer" containerID="118fdefc657755f927ca7b819e5ad342a9af82da76c5021058fd4de4c85f38bd" Mar 08 00:46:53 crc kubenswrapper[4762]: E0308 00:46:53.467222 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"118fdefc657755f927ca7b819e5ad342a9af82da76c5021058fd4de4c85f38bd\": container with ID starting with 118fdefc657755f927ca7b819e5ad342a9af82da76c5021058fd4de4c85f38bd not found: ID does not exist" containerID="118fdefc657755f927ca7b819e5ad342a9af82da76c5021058fd4de4c85f38bd" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.467272 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"118fdefc657755f927ca7b819e5ad342a9af82da76c5021058fd4de4c85f38bd"} err="failed to get container status \"118fdefc657755f927ca7b819e5ad342a9af82da76c5021058fd4de4c85f38bd\": rpc error: code = NotFound desc = could not find container \"118fdefc657755f927ca7b819e5ad342a9af82da76c5021058fd4de4c85f38bd\": container with ID starting with 118fdefc657755f927ca7b819e5ad342a9af82da76c5021058fd4de4c85f38bd not found: ID does not exist" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 
00:46:53.467315 4762 scope.go:117] "RemoveContainer" containerID="c91a604ac6931bfb815f6b3761a15b85b53f0d9c4e8d00da10809a7fb7f2776d" Mar 08 00:46:53 crc kubenswrapper[4762]: E0308 00:46:53.469960 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c91a604ac6931bfb815f6b3761a15b85b53f0d9c4e8d00da10809a7fb7f2776d\": container with ID starting with c91a604ac6931bfb815f6b3761a15b85b53f0d9c4e8d00da10809a7fb7f2776d not found: ID does not exist" containerID="c91a604ac6931bfb815f6b3761a15b85b53f0d9c4e8d00da10809a7fb7f2776d" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.470007 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c91a604ac6931bfb815f6b3761a15b85b53f0d9c4e8d00da10809a7fb7f2776d"} err="failed to get container status \"c91a604ac6931bfb815f6b3761a15b85b53f0d9c4e8d00da10809a7fb7f2776d\": rpc error: code = NotFound desc = could not find container \"c91a604ac6931bfb815f6b3761a15b85b53f0d9c4e8d00da10809a7fb7f2776d\": container with ID starting with c91a604ac6931bfb815f6b3761a15b85b53f0d9c4e8d00da10809a7fb7f2776d not found: ID does not exist" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.470035 4762 scope.go:117] "RemoveContainer" containerID="39649f17b90f3d75cb4b74a37708d8148f4cdc33a6e7f15f031d66522d86bd82" Mar 08 00:46:53 crc kubenswrapper[4762]: E0308 00:46:53.470388 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39649f17b90f3d75cb4b74a37708d8148f4cdc33a6e7f15f031d66522d86bd82\": container with ID starting with 39649f17b90f3d75cb4b74a37708d8148f4cdc33a6e7f15f031d66522d86bd82 not found: ID does not exist" containerID="39649f17b90f3d75cb4b74a37708d8148f4cdc33a6e7f15f031d66522d86bd82" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.470450 4762 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"39649f17b90f3d75cb4b74a37708d8148f4cdc33a6e7f15f031d66522d86bd82"} err="failed to get container status \"39649f17b90f3d75cb4b74a37708d8148f4cdc33a6e7f15f031d66522d86bd82\": rpc error: code = NotFound desc = could not find container \"39649f17b90f3d75cb4b74a37708d8148f4cdc33a6e7f15f031d66522d86bd82\": container with ID starting with 39649f17b90f3d75cb4b74a37708d8148f4cdc33a6e7f15f031d66522d86bd82 not found: ID does not exist" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.470484 4762 scope.go:117] "RemoveContainer" containerID="6016de5ac9c21884db3c67e7407d6789db516a4e12160a983203b1899838a1c9" Mar 08 00:46:53 crc kubenswrapper[4762]: E0308 00:46:53.470869 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6016de5ac9c21884db3c67e7407d6789db516a4e12160a983203b1899838a1c9\": container with ID starting with 6016de5ac9c21884db3c67e7407d6789db516a4e12160a983203b1899838a1c9 not found: ID does not exist" containerID="6016de5ac9c21884db3c67e7407d6789db516a4e12160a983203b1899838a1c9" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.470893 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6016de5ac9c21884db3c67e7407d6789db516a4e12160a983203b1899838a1c9"} err="failed to get container status \"6016de5ac9c21884db3c67e7407d6789db516a4e12160a983203b1899838a1c9\": rpc error: code = NotFound desc = could not find container \"6016de5ac9c21884db3c67e7407d6789db516a4e12160a983203b1899838a1c9\": container with ID starting with 6016de5ac9c21884db3c67e7407d6789db516a4e12160a983203b1899838a1c9 not found: ID does not exist" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.559450 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgkc4\" (UniqueName: \"kubernetes.io/projected/f67b828b-be4b-40d5-b1ac-e855fb50cb25-kube-api-access-lgkc4\") pod 
\"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.559579 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.559622 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-scripts\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.559862 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f67b828b-be4b-40d5-b1ac-e855fb50cb25-run-httpd\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.559961 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f67b828b-be4b-40d5-b1ac-e855fb50cb25-log-httpd\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.559987 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 
00:46:53.560073 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-config-data\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.661778 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f67b828b-be4b-40d5-b1ac-e855fb50cb25-run-httpd\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.662085 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f67b828b-be4b-40d5-b1ac-e855fb50cb25-log-httpd\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.662108 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.662148 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-config-data\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.662194 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgkc4\" (UniqueName: \"kubernetes.io/projected/f67b828b-be4b-40d5-b1ac-e855fb50cb25-kube-api-access-lgkc4\") pod 
\"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.662235 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.662257 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-scripts\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.662294 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f67b828b-be4b-40d5-b1ac-e855fb50cb25-run-httpd\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.662946 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f67b828b-be4b-40d5-b1ac-e855fb50cb25-log-httpd\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.668676 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-config-data\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0" Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.668729 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0"
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.669551 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-scripts\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0"
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.670180 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0"
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.677321 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgkc4\" (UniqueName: \"kubernetes.io/projected/f67b828b-be4b-40d5-b1ac-e855fb50cb25-kube-api-access-lgkc4\") pod \"ceilometer-0\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") " pod="openstack/ceilometer-0"
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.747096 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.764609 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zcqpb"
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.865169 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-combined-ca-bundle\") pod \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\" (UID: \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\") "
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.865248 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-config-data\") pod \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\" (UID: \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\") "
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.865443 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-scripts\") pod \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\" (UID: \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\") "
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.865469 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qzn7\" (UniqueName: \"kubernetes.io/projected/0922e07c-7b7b-4e78-98f0-19238b92ef5c-kube-api-access-9qzn7\") pod \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\" (UID: \"0922e07c-7b7b-4e78-98f0-19238b92ef5c\") "
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.868965 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-scripts" (OuterVolumeSpecName: "scripts") pod "0922e07c-7b7b-4e78-98f0-19238b92ef5c" (UID: "0922e07c-7b7b-4e78-98f0-19238b92ef5c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.870345 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0922e07c-7b7b-4e78-98f0-19238b92ef5c-kube-api-access-9qzn7" (OuterVolumeSpecName: "kube-api-access-9qzn7") pod "0922e07c-7b7b-4e78-98f0-19238b92ef5c" (UID: "0922e07c-7b7b-4e78-98f0-19238b92ef5c"). InnerVolumeSpecName "kube-api-access-9qzn7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.895005 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0922e07c-7b7b-4e78-98f0-19238b92ef5c" (UID: "0922e07c-7b7b-4e78-98f0-19238b92ef5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.897730 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-config-data" (OuterVolumeSpecName: "config-data") pod "0922e07c-7b7b-4e78-98f0-19238b92ef5c" (UID: "0922e07c-7b7b-4e78-98f0-19238b92ef5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.967488 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-scripts\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.967748 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qzn7\" (UniqueName: \"kubernetes.io/projected/0922e07c-7b7b-4e78-98f0-19238b92ef5c-kube-api-access-9qzn7\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.967772 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:53 crc kubenswrapper[4762]: I0308 00:46:53.967780 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0922e07c-7b7b-4e78-98f0-19238b92ef5c-config-data\") on node \"crc\" DevicePath \"\""
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.202619 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.325281 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f67b828b-be4b-40d5-b1ac-e855fb50cb25","Type":"ContainerStarted","Data":"7b82c86fc58000cec51ec39988e9e32d5a7e3b01b317368517a3ba851e64e923"}
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.326910 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zcqpb" event={"ID":"0922e07c-7b7b-4e78-98f0-19238b92ef5c","Type":"ContainerDied","Data":"825fa56bda363050bb41966e3ad8ac5048a33e25f0fb57cd3583c4b174d541e0"}
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.326943 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="825fa56bda363050bb41966e3ad8ac5048a33e25f0fb57cd3583c4b174d541e0"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.326998 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zcqpb"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.443380 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 08 00:46:54 crc kubenswrapper[4762]: E0308 00:46:54.443975 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0922e07c-7b7b-4e78-98f0-19238b92ef5c" containerName="nova-cell0-conductor-db-sync"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.444001 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="0922e07c-7b7b-4e78-98f0-19238b92ef5c" containerName="nova-cell0-conductor-db-sync"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.444369 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="0922e07c-7b7b-4e78-98f0-19238b92ef5c" containerName="nova-cell0-conductor-db-sync"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.445404 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.447270 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-p872h"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.448033 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.455619 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.580585 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scvzk\" (UniqueName: \"kubernetes.io/projected/1c1e4f25-7c6d-451d-bf78-2b1aa728b80e-kube-api-access-scvzk\") pod \"nova-cell0-conductor-0\" (UID: \"1c1e4f25-7c6d-451d-bf78-2b1aa728b80e\") " pod="openstack/nova-cell0-conductor-0"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.580732 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c1e4f25-7c6d-451d-bf78-2b1aa728b80e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1c1e4f25-7c6d-451d-bf78-2b1aa728b80e\") " pod="openstack/nova-cell0-conductor-0"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.580790 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c1e4f25-7c6d-451d-bf78-2b1aa728b80e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1c1e4f25-7c6d-451d-bf78-2b1aa728b80e\") " pod="openstack/nova-cell0-conductor-0"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.682189 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scvzk\" (UniqueName: \"kubernetes.io/projected/1c1e4f25-7c6d-451d-bf78-2b1aa728b80e-kube-api-access-scvzk\") pod \"nova-cell0-conductor-0\" (UID: \"1c1e4f25-7c6d-451d-bf78-2b1aa728b80e\") " pod="openstack/nova-cell0-conductor-0"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.682587 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c1e4f25-7c6d-451d-bf78-2b1aa728b80e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1c1e4f25-7c6d-451d-bf78-2b1aa728b80e\") " pod="openstack/nova-cell0-conductor-0"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.683443 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c1e4f25-7c6d-451d-bf78-2b1aa728b80e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1c1e4f25-7c6d-451d-bf78-2b1aa728b80e\") " pod="openstack/nova-cell0-conductor-0"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.688618 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c1e4f25-7c6d-451d-bf78-2b1aa728b80e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"1c1e4f25-7c6d-451d-bf78-2b1aa728b80e\") " pod="openstack/nova-cell0-conductor-0"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.702736 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c1e4f25-7c6d-451d-bf78-2b1aa728b80e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"1c1e4f25-7c6d-451d-bf78-2b1aa728b80e\") " pod="openstack/nova-cell0-conductor-0"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.703233 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scvzk\" (UniqueName: \"kubernetes.io/projected/1c1e4f25-7c6d-451d-bf78-2b1aa728b80e-kube-api-access-scvzk\") pod \"nova-cell0-conductor-0\" (UID: \"1c1e4f25-7c6d-451d-bf78-2b1aa728b80e\") " pod="openstack/nova-cell0-conductor-0"
Mar 08 00:46:54 crc kubenswrapper[4762]: I0308 00:46:54.762519 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 08 00:46:55 crc kubenswrapper[4762]: I0308 00:46:55.205619 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 08 00:46:55 crc kubenswrapper[4762]: W0308 00:46:55.207909 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c1e4f25_7c6d_451d_bf78_2b1aa728b80e.slice/crio-517cb1d569b83bec01a723a06b4f90b6ee27eba45d5a6251b9a41b43a5a8d6a5 WatchSource:0}: Error finding container 517cb1d569b83bec01a723a06b4f90b6ee27eba45d5a6251b9a41b43a5a8d6a5: Status 404 returned error can't find the container with id 517cb1d569b83bec01a723a06b4f90b6ee27eba45d5a6251b9a41b43a5a8d6a5
Mar 08 00:46:55 crc kubenswrapper[4762]: I0308 00:46:55.277856 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaa813a9-4ef0-4357-9a97-92abee0907f7" path="/var/lib/kubelet/pods/aaa813a9-4ef0-4357-9a97-92abee0907f7/volumes"
Mar 08 00:46:55 crc kubenswrapper[4762]: I0308 00:46:55.322174 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gktkm" podUID="cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" containerName="registry-server" probeResult="failure" output=<
Mar 08 00:46:55 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s
Mar 08 00:46:55 crc kubenswrapper[4762]: >
Mar 08 00:46:55 crc kubenswrapper[4762]: I0308 00:46:55.338540 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f67b828b-be4b-40d5-b1ac-e855fb50cb25","Type":"ContainerStarted","Data":"938cd83e2f3cca41a4e683ea1a2845d1cc57cc150fb54e55a57301256d9f689e"}
Mar 08 00:46:55 crc kubenswrapper[4762]: I0308 00:46:55.339447 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1c1e4f25-7c6d-451d-bf78-2b1aa728b80e","Type":"ContainerStarted","Data":"517cb1d569b83bec01a723a06b4f90b6ee27eba45d5a6251b9a41b43a5a8d6a5"}
Mar 08 00:46:56 crc kubenswrapper[4762]: I0308 00:46:56.350015 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"1c1e4f25-7c6d-451d-bf78-2b1aa728b80e","Type":"ContainerStarted","Data":"585ad7a3f97c131228689c96348c63e0f60a426fdfcf344e4bb9f24c652005da"}
Mar 08 00:46:56 crc kubenswrapper[4762]: I0308 00:46:56.350365 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Mar 08 00:46:56 crc kubenswrapper[4762]: I0308 00:46:56.353456 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f67b828b-be4b-40d5-b1ac-e855fb50cb25","Type":"ContainerStarted","Data":"6a9dd6ab509f3f4bde3eb36ef6e2f28f6f4ce2a4815d0a998075852f2dff8350"}
Mar 08 00:46:56 crc kubenswrapper[4762]: I0308 00:46:56.370444 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.370427521 podStartE2EDuration="2.370427521s" podCreationTimestamp="2026-03-08 00:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:46:56.368069659 +0000 UTC m=+1437.842214003" watchObservedRunningTime="2026-03-08 00:46:56.370427521 +0000 UTC m=+1437.844571865"
Mar 08 00:46:57 crc kubenswrapper[4762]: I0308 00:46:57.365550 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f67b828b-be4b-40d5-b1ac-e855fb50cb25","Type":"ContainerStarted","Data":"600a7765342ccf6ac59df0fa076eb0dcf9d9b4237d81e2877519443c86f77c96"}
Mar 08 00:46:59 crc kubenswrapper[4762]: I0308 00:46:59.394879 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f67b828b-be4b-40d5-b1ac-e855fb50cb25","Type":"ContainerStarted","Data":"761cd5b0b5a78d32895d01e8e77529d3afb58d0e43081a4ca9aa00f001f61f24"}
Mar 08 00:46:59 crc kubenswrapper[4762]: I0308 00:46:59.395317 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 08 00:46:59 crc kubenswrapper[4762]: I0308 00:46:59.442034 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.397060238 podStartE2EDuration="6.442015516s" podCreationTimestamp="2026-03-08 00:46:53 +0000 UTC" firstStartedPulling="2026-03-08 00:46:54.203700721 +0000 UTC m=+1435.677845065" lastFinishedPulling="2026-03-08 00:46:58.248655999 +0000 UTC m=+1439.722800343" observedRunningTime="2026-03-08 00:46:59.42114067 +0000 UTC m=+1440.895285044" watchObservedRunningTime="2026-03-08 00:46:59.442015516 +0000 UTC m=+1440.916159860"
Mar 08 00:47:04 crc kubenswrapper[4762]: I0308 00:47:04.296119 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gktkm"
Mar 08 00:47:04 crc kubenswrapper[4762]: I0308 00:47:04.364282 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gktkm"
Mar 08 00:47:04 crc kubenswrapper[4762]: I0308 00:47:04.799178 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.169949 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gktkm"]
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.342168 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-6crtj"]
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.343842 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6crtj"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.348378 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.348616 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.356751 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6crtj"]
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.470333 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-config-data\") pod \"nova-cell0-cell-mapping-6crtj\" (UID: \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\") " pod="openstack/nova-cell0-cell-mapping-6crtj"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.470679 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6crtj\" (UID: \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\") " pod="openstack/nova-cell0-cell-mapping-6crtj"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.470752 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6z24\" (UniqueName: \"kubernetes.io/projected/7f3ba52d-9795-4455-bc4a-2469ed8b73df-kube-api-access-f6z24\") pod \"nova-cell0-cell-mapping-6crtj\" (UID: \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\") " pod="openstack/nova-cell0-cell-mapping-6crtj"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.470815 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-scripts\") pod \"nova-cell0-cell-mapping-6crtj\" (UID: \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\") " pod="openstack/nova-cell0-cell-mapping-6crtj"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.472360 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gktkm" podUID="cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" containerName="registry-server" containerID="cri-o://82544994f138d07437cafaf075d5e9cddd7693235be313541e40a1f367ce47dd" gracePeriod=2
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.524417 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.526073 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.528465 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.565842 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.581869 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6z24\" (UniqueName: \"kubernetes.io/projected/7f3ba52d-9795-4455-bc4a-2469ed8b73df-kube-api-access-f6z24\") pod \"nova-cell0-cell-mapping-6crtj\" (UID: \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\") " pod="openstack/nova-cell0-cell-mapping-6crtj"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.581911 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f66769-f7d1-4efd-b2ea-7cdcebf49500-config-data\") pod \"nova-api-0\" (UID: \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\") " pod="openstack/nova-api-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.581950 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f66769-f7d1-4efd-b2ea-7cdcebf49500-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\") " pod="openstack/nova-api-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.581970 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-scripts\") pod \"nova-cell0-cell-mapping-6crtj\" (UID: \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\") " pod="openstack/nova-cell0-cell-mapping-6crtj"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.582000 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bblj\" (UniqueName: \"kubernetes.io/projected/77f66769-f7d1-4efd-b2ea-7cdcebf49500-kube-api-access-5bblj\") pod \"nova-api-0\" (UID: \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\") " pod="openstack/nova-api-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.582055 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77f66769-f7d1-4efd-b2ea-7cdcebf49500-logs\") pod \"nova-api-0\" (UID: \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\") " pod="openstack/nova-api-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.582077 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-config-data\") pod \"nova-cell0-cell-mapping-6crtj\" (UID: \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\") " pod="openstack/nova-cell0-cell-mapping-6crtj"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.582133 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6crtj\" (UID: \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\") " pod="openstack/nova-cell0-cell-mapping-6crtj"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.599656 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.601045 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.607032 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-scripts\") pod \"nova-cell0-cell-mapping-6crtj\" (UID: \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\") " pod="openstack/nova-cell0-cell-mapping-6crtj"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.607142 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.607140 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-config-data\") pod \"nova-cell0-cell-mapping-6crtj\" (UID: \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\") " pod="openstack/nova-cell0-cell-mapping-6crtj"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.621916 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.630205 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6z24\" (UniqueName: \"kubernetes.io/projected/7f3ba52d-9795-4455-bc4a-2469ed8b73df-kube-api-access-f6z24\") pod \"nova-cell0-cell-mapping-6crtj\" (UID: \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\") " pod="openstack/nova-cell0-cell-mapping-6crtj"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.646941 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-6crtj\" (UID: \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\") " pod="openstack/nova-cell0-cell-mapping-6crtj"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.670090 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.671356 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.672095 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6crtj"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.672730 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.686506 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.687812 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77f66769-f7d1-4efd-b2ea-7cdcebf49500-logs\") pod \"nova-api-0\" (UID: \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\") " pod="openstack/nova-api-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.687928 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f66769-f7d1-4efd-b2ea-7cdcebf49500-config-data\") pod \"nova-api-0\" (UID: \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\") " pod="openstack/nova-api-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.687964 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f66769-f7d1-4efd-b2ea-7cdcebf49500-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\") " pod="openstack/nova-api-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.687997 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bblj\" (UniqueName: \"kubernetes.io/projected/77f66769-f7d1-4efd-b2ea-7cdcebf49500-kube-api-access-5bblj\") pod \"nova-api-0\" (UID: \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\") " pod="openstack/nova-api-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.691600 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77f66769-f7d1-4efd-b2ea-7cdcebf49500-logs\") pod \"nova-api-0\" (UID: \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\") " pod="openstack/nova-api-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.699703 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f66769-f7d1-4efd-b2ea-7cdcebf49500-config-data\") pod \"nova-api-0\" (UID: \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\") " pod="openstack/nova-api-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.701116 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f66769-f7d1-4efd-b2ea-7cdcebf49500-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\") " pod="openstack/nova-api-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.715845 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.717438 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.729307 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.731087 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bblj\" (UniqueName: \"kubernetes.io/projected/77f66769-f7d1-4efd-b2ea-7cdcebf49500-kube-api-access-5bblj\") pod \"nova-api-0\" (UID: \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\") " pod="openstack/nova-api-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.776971 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.789234 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede839bd-0b3d-40a3-993e-99df5675f617-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ede839bd-0b3d-40a3-993e-99df5675f617\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.789294 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede839bd-0b3d-40a3-993e-99df5675f617-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ede839bd-0b3d-40a3-993e-99df5675f617\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.789349 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0de75d3-2671-4d5d-9bd2-78212160505e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a0de75d3-2671-4d5d-9bd2-78212160505e\") " pod="openstack/nova-scheduler-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.789425 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rppjv\" (UniqueName: \"kubernetes.io/projected/a0de75d3-2671-4d5d-9bd2-78212160505e-kube-api-access-rppjv\") pod \"nova-scheduler-0\" (UID: \"a0de75d3-2671-4d5d-9bd2-78212160505e\") " pod="openstack/nova-scheduler-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.789443 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86nn5\" (UniqueName: \"kubernetes.io/projected/ede839bd-0b3d-40a3-993e-99df5675f617-kube-api-access-86nn5\") pod \"nova-cell1-novncproxy-0\" (UID: \"ede839bd-0b3d-40a3-993e-99df5675f617\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.789462 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0de75d3-2671-4d5d-9bd2-78212160505e-config-data\") pod \"nova-scheduler-0\" (UID: \"a0de75d3-2671-4d5d-9bd2-78212160505e\") " pod="openstack/nova-scheduler-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.836213 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-2g5s9"]
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.840930 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.851913 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.855197 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-2g5s9"]
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.891831 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2spx\" (UniqueName: \"kubernetes.io/projected/887fc9b1-72ee-4f80-abe4-bec824e54090-kube-api-access-h2spx\") pod \"nova-metadata-0\" (UID: \"887fc9b1-72ee-4f80-abe4-bec824e54090\") " pod="openstack/nova-metadata-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.891878 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rppjv\" (UniqueName: \"kubernetes.io/projected/a0de75d3-2671-4d5d-9bd2-78212160505e-kube-api-access-rppjv\") pod \"nova-scheduler-0\" (UID: \"a0de75d3-2671-4d5d-9bd2-78212160505e\") " pod="openstack/nova-scheduler-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.891896 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86nn5\" (UniqueName: \"kubernetes.io/projected/ede839bd-0b3d-40a3-993e-99df5675f617-kube-api-access-86nn5\") pod \"nova-cell1-novncproxy-0\" (UID: \"ede839bd-0b3d-40a3-993e-99df5675f617\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.891917 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0de75d3-2671-4d5d-9bd2-78212160505e-config-data\") pod \"nova-scheduler-0\" (UID: \"a0de75d3-2671-4d5d-9bd2-78212160505e\") " pod="openstack/nova-scheduler-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.891978 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede839bd-0b3d-40a3-993e-99df5675f617-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ede839bd-0b3d-40a3-993e-99df5675f617\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.892001 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887fc9b1-72ee-4f80-abe4-bec824e54090-config-data\") pod \"nova-metadata-0\" (UID: \"887fc9b1-72ee-4f80-abe4-bec824e54090\") " pod="openstack/nova-metadata-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.892019 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/887fc9b1-72ee-4f80-abe4-bec824e54090-logs\") pod \"nova-metadata-0\" (UID: \"887fc9b1-72ee-4f80-abe4-bec824e54090\") " pod="openstack/nova-metadata-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.892039 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede839bd-0b3d-40a3-993e-99df5675f617-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ede839bd-0b3d-40a3-993e-99df5675f617\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.892090 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0de75d3-2671-4d5d-9bd2-78212160505e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a0de75d3-2671-4d5d-9bd2-78212160505e\") " pod="openstack/nova-scheduler-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.892131 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887fc9b1-72ee-4f80-abe4-bec824e54090-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"887fc9b1-72ee-4f80-abe4-bec824e54090\") " pod="openstack/nova-metadata-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.899202 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0de75d3-2671-4d5d-9bd2-78212160505e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a0de75d3-2671-4d5d-9bd2-78212160505e\") " pod="openstack/nova-scheduler-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.899731 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede839bd-0b3d-40a3-993e-99df5675f617-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ede839bd-0b3d-40a3-993e-99df5675f617\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.904233 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0de75d3-2671-4d5d-9bd2-78212160505e-config-data\") pod \"nova-scheduler-0\" (UID: \"a0de75d3-2671-4d5d-9bd2-78212160505e\") " pod="openstack/nova-scheduler-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.917789 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede839bd-0b3d-40a3-993e-99df5675f617-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ede839bd-0b3d-40a3-993e-99df5675f617\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.920674 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rppjv\" (UniqueName: \"kubernetes.io/projected/a0de75d3-2671-4d5d-9bd2-78212160505e-kube-api-access-rppjv\") pod \"nova-scheduler-0\" (UID: \"a0de75d3-2671-4d5d-9bd2-78212160505e\") " pod="openstack/nova-scheduler-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.920839 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86nn5\" (UniqueName: \"kubernetes.io/projected/ede839bd-0b3d-40a3-993e-99df5675f617-kube-api-access-86nn5\") pod \"nova-cell1-novncproxy-0\" (UID: \"ede839bd-0b3d-40a3-993e-99df5675f617\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.993711 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.994009 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d284\" (UniqueName: \"kubernetes.io/projected/9299f483-ade6-448b-b5d8-2b39619abd6e-kube-api-access-7d284\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.994083 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.994140 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887fc9b1-72ee-4f80-abe4-bec824e54090-config-data\") pod \"nova-metadata-0\" (UID: \"887fc9b1-72ee-4f80-abe4-bec824e54090\") " pod="openstack/nova-metadata-0"
Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.994163 4762 reconciler_common.go:218] "operationExecutor.MountVolume started
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/887fc9b1-72ee-4f80-abe4-bec824e54090-logs\") pod \"nova-metadata-0\" (UID: \"887fc9b1-72ee-4f80-abe4-bec824e54090\") " pod="openstack/nova-metadata-0" Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.994251 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.994268 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.994318 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887fc9b1-72ee-4f80-abe4-bec824e54090-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"887fc9b1-72ee-4f80-abe4-bec824e54090\") " pod="openstack/nova-metadata-0" Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.994335 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-config\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.994379 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2spx\" (UniqueName: 
\"kubernetes.io/projected/887fc9b1-72ee-4f80-abe4-bec824e54090-kube-api-access-h2spx\") pod \"nova-metadata-0\" (UID: \"887fc9b1-72ee-4f80-abe4-bec824e54090\") " pod="openstack/nova-metadata-0" Mar 08 00:47:05 crc kubenswrapper[4762]: I0308 00:47:05.994932 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/887fc9b1-72ee-4f80-abe4-bec824e54090-logs\") pod \"nova-metadata-0\" (UID: \"887fc9b1-72ee-4f80-abe4-bec824e54090\") " pod="openstack/nova-metadata-0" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:05.998310 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887fc9b1-72ee-4f80-abe4-bec824e54090-config-data\") pod \"nova-metadata-0\" (UID: \"887fc9b1-72ee-4f80-abe4-bec824e54090\") " pod="openstack/nova-metadata-0" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:05.998481 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887fc9b1-72ee-4f80-abe4-bec824e54090-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"887fc9b1-72ee-4f80-abe4-bec824e54090\") " pod="openstack/nova-metadata-0" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.010406 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2spx\" (UniqueName: \"kubernetes.io/projected/887fc9b1-72ee-4f80-abe4-bec824e54090-kube-api-access-h2spx\") pod \"nova-metadata-0\" (UID: \"887fc9b1-72ee-4f80-abe4-bec824e54090\") " pod="openstack/nova-metadata-0" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.100290 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 
00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.100334 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d284\" (UniqueName: \"kubernetes.io/projected/9299f483-ade6-448b-b5d8-2b39619abd6e-kube-api-access-7d284\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.100387 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.100492 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.100510 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.100549 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-config\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.101200 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.101750 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.101956 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.102538 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.103010 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-config\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.118476 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d284\" (UniqueName: 
\"kubernetes.io/projected/9299f483-ade6-448b-b5d8-2b39619abd6e-kube-api-access-7d284\") pod \"dnsmasq-dns-568d7fd7cf-2g5s9\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.146414 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.173736 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.241300 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.259533 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.535939 4762 generic.go:334] "Generic (PLEG): container finished" podID="cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" containerID="82544994f138d07437cafaf075d5e9cddd7693235be313541e40a1f367ce47dd" exitCode=0 Mar 08 00:47:06 crc kubenswrapper[4762]: I0308 00:47:06.536177 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gktkm" event={"ID":"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5","Type":"ContainerDied","Data":"82544994f138d07437cafaf075d5e9cddd7693235be313541e40a1f367ce47dd"} Mar 08 00:47:07 crc kubenswrapper[4762]: W0308 00:47:07.056953 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f3ba52d_9795_4455_bc4a_2469ed8b73df.slice/crio-9a9c889a72fb08f21c364b1d8391c1ca5f8a08730555a8342a728d5ff5dcdcab WatchSource:0}: Error finding container 9a9c889a72fb08f21c364b1d8391c1ca5f8a08730555a8342a728d5ff5dcdcab: Status 404 returned error can't find the container with id 
9a9c889a72fb08f21c364b1d8391c1ca5f8a08730555a8342a728d5ff5dcdcab Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.065044 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-6crtj"] Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.260677 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gktkm" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.392510 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.409110 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.431627 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-2g5s9"] Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.444347 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 08 00:47:07 crc kubenswrapper[4762]: W0308 00:47:07.449314 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod887fc9b1_72ee_4f80_abe4_bec824e54090.slice/crio-130f7b7f2adeaceaac116274920721cf866d4791aafd418ee29539a1170d5e27 WatchSource:0}: Error finding container 130f7b7f2adeaceaac116274920721cf866d4791aafd418ee29539a1170d5e27: Status 404 returned error can't find the container with id 130f7b7f2adeaceaac116274920721cf866d4791aafd418ee29539a1170d5e27 Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.456355 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.457322 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gljhw\" (UniqueName: 
\"kubernetes.io/projected/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-kube-api-access-gljhw\") pod \"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5\" (UID: \"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5\") " Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.457432 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-catalog-content\") pod \"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5\" (UID: \"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5\") " Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.457512 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-utilities\") pod \"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5\" (UID: \"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5\") " Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.460428 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-utilities" (OuterVolumeSpecName: "utilities") pod "cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" (UID: "cd8055fe-d17f-46f7-b7a1-62eaf9402dd5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.491135 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-kube-api-access-gljhw" (OuterVolumeSpecName: "kube-api-access-gljhw") pod "cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" (UID: "cd8055fe-d17f-46f7-b7a1-62eaf9402dd5"). InnerVolumeSpecName "kube-api-access-gljhw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.555354 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4b7h8"] Mar 08 00:47:07 crc kubenswrapper[4762]: E0308 00:47:07.555836 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" containerName="registry-server" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.555852 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" containerName="registry-server" Mar 08 00:47:07 crc kubenswrapper[4762]: E0308 00:47:07.555880 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" containerName="extract-content" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.555887 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" containerName="extract-content" Mar 08 00:47:07 crc kubenswrapper[4762]: E0308 00:47:07.555897 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" containerName="extract-utilities" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.555904 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" containerName="extract-utilities" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.556095 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" containerName="registry-server" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.556800 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4b7h8" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.558653 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0de75d3-2671-4d5d-9bd2-78212160505e","Type":"ContainerStarted","Data":"079ea5ae95a1d59d3c3a609281492d192ddbe56c44f4a62fc72339f46d12096a"} Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.558843 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.558991 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.559393 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.559417 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gljhw\" (UniqueName: \"kubernetes.io/projected/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-kube-api-access-gljhw\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.569149 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6crtj" event={"ID":"7f3ba52d-9795-4455-bc4a-2469ed8b73df","Type":"ContainerStarted","Data":"70b7ff0ea294294e5fa962593375dd3d535116656c1b45e5a77dd70b2ae9f521"} Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.569181 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6crtj" event={"ID":"7f3ba52d-9795-4455-bc4a-2469ed8b73df","Type":"ContainerStarted","Data":"9a9c889a72fb08f21c364b1d8391c1ca5f8a08730555a8342a728d5ff5dcdcab"} Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.577299 4762 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4b7h8"] Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.587987 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" event={"ID":"9299f483-ade6-448b-b5d8-2b39619abd6e","Type":"ContainerStarted","Data":"c6e9bd967dee9dd29ed65a23980cd56c2a24710da755523bcbafed24314fd66e"} Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.589159 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"77f66769-f7d1-4efd-b2ea-7cdcebf49500","Type":"ContainerStarted","Data":"3567e7e938cd12a349a834a386397d0975376a3c6a42a8e378a7dcf35afcc3e5"} Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.589911 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"887fc9b1-72ee-4f80-abe4-bec824e54090","Type":"ContainerStarted","Data":"130f7b7f2adeaceaac116274920721cf866d4791aafd418ee29539a1170d5e27"} Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.591633 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gktkm" event={"ID":"cd8055fe-d17f-46f7-b7a1-62eaf9402dd5","Type":"ContainerDied","Data":"ec3376a6995b02bc9ef85e372f9fb869faba052e49c8dd264ce08b7ccee7a700"} Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.591659 4762 scope.go:117] "RemoveContainer" containerID="82544994f138d07437cafaf075d5e9cddd7693235be313541e40a1f367ce47dd" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.591829 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gktkm" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.606068 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ede839bd-0b3d-40a3-993e-99df5675f617","Type":"ContainerStarted","Data":"03177a71684cd21c85897b2145bd46b89d4aa57acc2c1e2a61332b33f0f3c0ea"} Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.617683 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-6crtj" podStartSLOduration=2.6176653549999997 podStartE2EDuration="2.617665355s" podCreationTimestamp="2026-03-08 00:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:47:07.600077978 +0000 UTC m=+1449.074222322" watchObservedRunningTime="2026-03-08 00:47:07.617665355 +0000 UTC m=+1449.091809699" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.657814 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" (UID: "cd8055fe-d17f-46f7-b7a1-62eaf9402dd5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.661449 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-scripts\") pod \"nova-cell1-conductor-db-sync-4b7h8\" (UID: \"56650c9a-96a5-4911-8775-8f4c3013053f\") " pod="openstack/nova-cell1-conductor-db-sync-4b7h8" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.661699 4762 scope.go:117] "RemoveContainer" containerID="a0733e093453af09e168a34e892daa07ac2b89fa65c4d15e5ae1c23d6c020ac7" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.661724 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4b7h8\" (UID: \"56650c9a-96a5-4911-8775-8f4c3013053f\") " pod="openstack/nova-cell1-conductor-db-sync-4b7h8" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.661747 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rltmq\" (UniqueName: \"kubernetes.io/projected/56650c9a-96a5-4911-8775-8f4c3013053f-kube-api-access-rltmq\") pod \"nova-cell1-conductor-db-sync-4b7h8\" (UID: \"56650c9a-96a5-4911-8775-8f4c3013053f\") " pod="openstack/nova-cell1-conductor-db-sync-4b7h8" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.661792 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-config-data\") pod \"nova-cell1-conductor-db-sync-4b7h8\" (UID: \"56650c9a-96a5-4911-8775-8f4c3013053f\") " pod="openstack/nova-cell1-conductor-db-sync-4b7h8" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.662282 4762 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.690098 4762 scope.go:117] "RemoveContainer" containerID="7e0a5928d27937eb1ed7b47c685528e5bf7ef4b0ffd9cb8473b82d761ad4dd82" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.764031 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-scripts\") pod \"nova-cell1-conductor-db-sync-4b7h8\" (UID: \"56650c9a-96a5-4911-8775-8f4c3013053f\") " pod="openstack/nova-cell1-conductor-db-sync-4b7h8" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.764072 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4b7h8\" (UID: \"56650c9a-96a5-4911-8775-8f4c3013053f\") " pod="openstack/nova-cell1-conductor-db-sync-4b7h8" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.764117 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rltmq\" (UniqueName: \"kubernetes.io/projected/56650c9a-96a5-4911-8775-8f4c3013053f-kube-api-access-rltmq\") pod \"nova-cell1-conductor-db-sync-4b7h8\" (UID: \"56650c9a-96a5-4911-8775-8f4c3013053f\") " pod="openstack/nova-cell1-conductor-db-sync-4b7h8" Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.764168 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-config-data\") pod \"nova-cell1-conductor-db-sync-4b7h8\" (UID: \"56650c9a-96a5-4911-8775-8f4c3013053f\") " pod="openstack/nova-cell1-conductor-db-sync-4b7h8" Mar 08 00:47:07 crc 
kubenswrapper[4762]: I0308 00:47:07.768223 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-scripts\") pod \"nova-cell1-conductor-db-sync-4b7h8\" (UID: \"56650c9a-96a5-4911-8775-8f4c3013053f\") " pod="openstack/nova-cell1-conductor-db-sync-4b7h8"
Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.768809 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-4b7h8\" (UID: \"56650c9a-96a5-4911-8775-8f4c3013053f\") " pod="openstack/nova-cell1-conductor-db-sync-4b7h8"
Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.773424 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-config-data\") pod \"nova-cell1-conductor-db-sync-4b7h8\" (UID: \"56650c9a-96a5-4911-8775-8f4c3013053f\") " pod="openstack/nova-cell1-conductor-db-sync-4b7h8"
Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.784958 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rltmq\" (UniqueName: \"kubernetes.io/projected/56650c9a-96a5-4911-8775-8f4c3013053f-kube-api-access-rltmq\") pod \"nova-cell1-conductor-db-sync-4b7h8\" (UID: \"56650c9a-96a5-4911-8775-8f4c3013053f\") " pod="openstack/nova-cell1-conductor-db-sync-4b7h8"
Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.922651 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gktkm"]
Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.931857 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gktkm"]
Mar 08 00:47:07 crc kubenswrapper[4762]: I0308 00:47:07.944729 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4b7h8"
Mar 08 00:47:08 crc kubenswrapper[4762]: I0308 00:47:08.515554 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 08 00:47:08 crc kubenswrapper[4762]: I0308 00:47:08.516155 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="ceilometer-central-agent" containerID="cri-o://938cd83e2f3cca41a4e683ea1a2845d1cc57cc150fb54e55a57301256d9f689e" gracePeriod=30
Mar 08 00:47:08 crc kubenswrapper[4762]: I0308 00:47:08.516287 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="proxy-httpd" containerID="cri-o://761cd5b0b5a78d32895d01e8e77529d3afb58d0e43081a4ca9aa00f001f61f24" gracePeriod=30
Mar 08 00:47:08 crc kubenswrapper[4762]: I0308 00:47:08.516322 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="sg-core" containerID="cri-o://600a7765342ccf6ac59df0fa076eb0dcf9d9b4237d81e2877519443c86f77c96" gracePeriod=30
Mar 08 00:47:08 crc kubenswrapper[4762]: I0308 00:47:08.516358 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="ceilometer-notification-agent" containerID="cri-o://6a9dd6ab509f3f4bde3eb36ef6e2f28f6f4ce2a4815d0a998075852f2dff8350" gracePeriod=30
Mar 08 00:47:08 crc kubenswrapper[4762]: I0308 00:47:08.538016 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.227:3000/\": EOF"
Mar 08 00:47:08 crc kubenswrapper[4762]: I0308 00:47:08.575056 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4b7h8"]
Mar 08 00:47:08 crc kubenswrapper[4762]: I0308 00:47:08.619557 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4b7h8" event={"ID":"56650c9a-96a5-4911-8775-8f4c3013053f","Type":"ContainerStarted","Data":"598c1a8fb8e831ad6894ce2e221f7f80f2da0afc9899b68d2a402ab6d802a54f"}
Mar 08 00:47:08 crc kubenswrapper[4762]: I0308 00:47:08.627483 4762 generic.go:334] "Generic (PLEG): container finished" podID="9299f483-ade6-448b-b5d8-2b39619abd6e" containerID="7984706ad04c1cf3c5a77d295994336905c5c86837bf5912b3a19772748ea60b" exitCode=0
Mar 08 00:47:08 crc kubenswrapper[4762]: I0308 00:47:08.628626 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" event={"ID":"9299f483-ade6-448b-b5d8-2b39619abd6e","Type":"ContainerDied","Data":"7984706ad04c1cf3c5a77d295994336905c5c86837bf5912b3a19772748ea60b"}
Mar 08 00:47:09 crc kubenswrapper[4762]: I0308 00:47:09.319898 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd8055fe-d17f-46f7-b7a1-62eaf9402dd5" path="/var/lib/kubelet/pods/cd8055fe-d17f-46f7-b7a1-62eaf9402dd5/volumes"
Mar 08 00:47:09 crc kubenswrapper[4762]: I0308 00:47:09.653498 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 08 00:47:09 crc kubenswrapper[4762]: I0308 00:47:09.670747 4762 generic.go:334] "Generic (PLEG): container finished" podID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerID="761cd5b0b5a78d32895d01e8e77529d3afb58d0e43081a4ca9aa00f001f61f24" exitCode=0
Mar 08 00:47:09 crc kubenswrapper[4762]: I0308 00:47:09.670802 4762 generic.go:334] "Generic (PLEG): container finished" podID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerID="600a7765342ccf6ac59df0fa076eb0dcf9d9b4237d81e2877519443c86f77c96" exitCode=2
Mar 08 00:47:09 crc kubenswrapper[4762]: I0308 00:47:09.670811 4762 generic.go:334] "Generic (PLEG): container finished" podID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerID="6a9dd6ab509f3f4bde3eb36ef6e2f28f6f4ce2a4815d0a998075852f2dff8350" exitCode=0
Mar 08 00:47:09 crc kubenswrapper[4762]: I0308 00:47:09.670817 4762 generic.go:334] "Generic (PLEG): container finished" podID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerID="938cd83e2f3cca41a4e683ea1a2845d1cc57cc150fb54e55a57301256d9f689e" exitCode=0
Mar 08 00:47:09 crc kubenswrapper[4762]: I0308 00:47:09.670836 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f67b828b-be4b-40d5-b1ac-e855fb50cb25","Type":"ContainerDied","Data":"761cd5b0b5a78d32895d01e8e77529d3afb58d0e43081a4ca9aa00f001f61f24"}
Mar 08 00:47:09 crc kubenswrapper[4762]: I0308 00:47:09.670880 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f67b828b-be4b-40d5-b1ac-e855fb50cb25","Type":"ContainerDied","Data":"600a7765342ccf6ac59df0fa076eb0dcf9d9b4237d81e2877519443c86f77c96"}
Mar 08 00:47:09 crc kubenswrapper[4762]: I0308 00:47:09.670890 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f67b828b-be4b-40d5-b1ac-e855fb50cb25","Type":"ContainerDied","Data":"6a9dd6ab509f3f4bde3eb36ef6e2f28f6f4ce2a4815d0a998075852f2dff8350"}
Mar 08 00:47:09 crc kubenswrapper[4762]: I0308 00:47:09.670899 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f67b828b-be4b-40d5-b1ac-e855fb50cb25","Type":"ContainerDied","Data":"938cd83e2f3cca41a4e683ea1a2845d1cc57cc150fb54e55a57301256d9f689e"}
Mar 08 00:47:09 crc kubenswrapper[4762]: I0308 00:47:09.680691 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 08 00:47:10 crc kubenswrapper[4762]: I0308 00:47:10.681921 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4b7h8" event={"ID":"56650c9a-96a5-4911-8775-8f4c3013053f","Type":"ContainerStarted","Data":"0ad2e2fbc4ee99bb337b6dd9ee75ad05b741c7275b6219774e93d5804e269ede"}
Mar 08 00:47:10 crc kubenswrapper[4762]: I0308 00:47:10.702047 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-4b7h8" podStartSLOduration=3.7020261789999998 podStartE2EDuration="3.702026179s" podCreationTimestamp="2026-03-08 00:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:47:10.698622696 +0000 UTC m=+1452.172767050" watchObservedRunningTime="2026-03-08 00:47:10.702026179 +0000 UTC m=+1452.176170543"
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.544163 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.621436 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-scripts\") pod \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") "
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.621724 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f67b828b-be4b-40d5-b1ac-e855fb50cb25-run-httpd\") pod \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") "
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.621747 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-sg-core-conf-yaml\") pod \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") "
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.621784 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-combined-ca-bundle\") pod \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") "
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.621838 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgkc4\" (UniqueName: \"kubernetes.io/projected/f67b828b-be4b-40d5-b1ac-e855fb50cb25-kube-api-access-lgkc4\") pod \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") "
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.621923 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-config-data\") pod \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") "
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.622529 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f67b828b-be4b-40d5-b1ac-e855fb50cb25-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f67b828b-be4b-40d5-b1ac-e855fb50cb25" (UID: "f67b828b-be4b-40d5-b1ac-e855fb50cb25"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.622897 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f67b828b-be4b-40d5-b1ac-e855fb50cb25-log-httpd\") pod \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\" (UID: \"f67b828b-be4b-40d5-b1ac-e855fb50cb25\") "
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.623624 4762 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f67b828b-be4b-40d5-b1ac-e855fb50cb25-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.623978 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f67b828b-be4b-40d5-b1ac-e855fb50cb25-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f67b828b-be4b-40d5-b1ac-e855fb50cb25" (UID: "f67b828b-be4b-40d5-b1ac-e855fb50cb25"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.671296 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f67b828b-be4b-40d5-b1ac-e855fb50cb25-kube-api-access-lgkc4" (OuterVolumeSpecName: "kube-api-access-lgkc4") pod "f67b828b-be4b-40d5-b1ac-e855fb50cb25" (UID: "f67b828b-be4b-40d5-b1ac-e855fb50cb25"). InnerVolumeSpecName "kube-api-access-lgkc4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.671427 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-scripts" (OuterVolumeSpecName: "scripts") pod "f67b828b-be4b-40d5-b1ac-e855fb50cb25" (UID: "f67b828b-be4b-40d5-b1ac-e855fb50cb25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.716876 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.716957 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f67b828b-be4b-40d5-b1ac-e855fb50cb25","Type":"ContainerDied","Data":"7b82c86fc58000cec51ec39988e9e32d5a7e3b01b317368517a3ba851e64e923"}
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.716995 4762 scope.go:117] "RemoveContainer" containerID="761cd5b0b5a78d32895d01e8e77529d3afb58d0e43081a4ca9aa00f001f61f24"
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.731122 4762 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f67b828b-be4b-40d5-b1ac-e855fb50cb25-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.731152 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-scripts\") on node \"crc\" DevicePath \"\""
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.731162 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgkc4\" (UniqueName: \"kubernetes.io/projected/f67b828b-be4b-40d5-b1ac-e855fb50cb25-kube-api-access-lgkc4\") on node \"crc\" DevicePath \"\""
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.744911 4762 scope.go:117] "RemoveContainer" containerID="600a7765342ccf6ac59df0fa076eb0dcf9d9b4237d81e2877519443c86f77c96"
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.765150 4762 scope.go:117] "RemoveContainer" containerID="6a9dd6ab509f3f4bde3eb36ef6e2f28f6f4ce2a4815d0a998075852f2dff8350"
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.785586 4762 scope.go:117] "RemoveContainer" containerID="938cd83e2f3cca41a4e683ea1a2845d1cc57cc150fb54e55a57301256d9f689e"
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.868822 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f67b828b-be4b-40d5-b1ac-e855fb50cb25" (UID: "f67b828b-be4b-40d5-b1ac-e855fb50cb25"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.930917 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f67b828b-be4b-40d5-b1ac-e855fb50cb25" (UID: "f67b828b-be4b-40d5-b1ac-e855fb50cb25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.934593 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.934632 4762 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 08 00:47:11 crc kubenswrapper[4762]: I0308 00:47:11.943378 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-config-data" (OuterVolumeSpecName: "config-data") pod "f67b828b-be4b-40d5-b1ac-e855fb50cb25" (UID: "f67b828b-be4b-40d5-b1ac-e855fb50cb25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.035845 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f67b828b-be4b-40d5-b1ac-e855fb50cb25-config-data\") on node \"crc\" DevicePath \"\""
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.053907 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.070329 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.083956 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 08 00:47:12 crc kubenswrapper[4762]: E0308 00:47:12.097680 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="proxy-httpd"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.097721 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="proxy-httpd"
Mar 08 00:47:12 crc kubenswrapper[4762]: E0308 00:47:12.097733 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="ceilometer-notification-agent"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.097739 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="ceilometer-notification-agent"
Mar 08 00:47:12 crc kubenswrapper[4762]: E0308 00:47:12.097801 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="sg-core"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.097809 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="sg-core"
Mar 08 00:47:12 crc kubenswrapper[4762]: E0308 00:47:12.097827 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="ceilometer-central-agent"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.097835 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="ceilometer-central-agent"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.098906 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="ceilometer-central-agent"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.098932 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="proxy-httpd"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.098952 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="sg-core"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.098971 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" containerName="ceilometer-notification-agent"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.125118 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.125859 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.127656 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.128073 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.243075 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-run-httpd\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.243133 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.243165 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.243427 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-scripts\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.243488 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mnmq\" (UniqueName: \"kubernetes.io/projected/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-kube-api-access-8mnmq\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.243555 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-log-httpd\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.243644 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-config-data\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.346121 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-scripts\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.346195 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mnmq\" (UniqueName: \"kubernetes.io/projected/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-kube-api-access-8mnmq\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.346257 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-log-httpd\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.346316 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-config-data\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.346429 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-run-httpd\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.346475 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.346522 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.347822 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-run-httpd\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.349152 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-log-httpd\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.352888 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.356355 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-scripts\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.357403 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.357819 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-config-data\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.366073 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mnmq\" (UniqueName: \"kubernetes.io/projected/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-kube-api-access-8mnmq\") pod \"ceilometer-0\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.446996 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.730237 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0de75d3-2671-4d5d-9bd2-78212160505e","Type":"ContainerStarted","Data":"eeddcdda7926b57d8eb0a904f85ee9e4f4606f66e8673d7706a3ce4561e77b76"}
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.739228 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" event={"ID":"9299f483-ade6-448b-b5d8-2b39619abd6e","Type":"ContainerStarted","Data":"445e700e2b418a1af4b38af93a30bd9cd60ae56e121107141194ddc3208c7f21"}
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.739364 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.744459 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"77f66769-f7d1-4efd-b2ea-7cdcebf49500","Type":"ContainerStarted","Data":"55911e487d11140e85b9962185a42688c70f5e60f948641e3c35216c0a29145f"}
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.744501 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"77f66769-f7d1-4efd-b2ea-7cdcebf49500","Type":"ContainerStarted","Data":"c90a44801bdedf1a499f84f5d11b0363c66a3fe33c07279e7f3cea377a607839"}
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.755244 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"887fc9b1-72ee-4f80-abe4-bec824e54090","Type":"ContainerStarted","Data":"04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e"}
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.755297 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"887fc9b1-72ee-4f80-abe4-bec824e54090","Type":"ContainerStarted","Data":"853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec"}
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.755436 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="887fc9b1-72ee-4f80-abe4-bec824e54090" containerName="nova-metadata-log" containerID="cri-o://853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec" gracePeriod=30
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.755822 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="887fc9b1-72ee-4f80-abe4-bec824e54090" containerName="nova-metadata-metadata" containerID="cri-o://04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e" gracePeriod=30
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.769294 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ede839bd-0b3d-40a3-993e-99df5675f617","Type":"ContainerStarted","Data":"0ade3523ef37f533ab708b0a2d54f5af109ed7d5e82ff7d4ba251836373561bf"}
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.769414 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="ede839bd-0b3d-40a3-993e-99df5675f617" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://0ade3523ef37f533ab708b0a2d54f5af109ed7d5e82ff7d4ba251836373561bf" gracePeriod=30
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.772139 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.903228102 podStartE2EDuration="7.772124234s" podCreationTimestamp="2026-03-08 00:47:05 +0000 UTC" firstStartedPulling="2026-03-08 00:47:07.437853765 +0000 UTC m=+1448.911998099" lastFinishedPulling="2026-03-08 00:47:11.306749887 +0000 UTC m=+1452.780894231" observedRunningTime="2026-03-08 00:47:12.760587143 +0000 UTC m=+1454.234731497" watchObservedRunningTime="2026-03-08 00:47:12.772124234 +0000 UTC m=+1454.246268578"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.788865 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.937316371 podStartE2EDuration="7.788846984s" podCreationTimestamp="2026-03-08 00:47:05 +0000 UTC" firstStartedPulling="2026-03-08 00:47:07.45342166 +0000 UTC m=+1448.927566004" lastFinishedPulling="2026-03-08 00:47:11.304952273 +0000 UTC m=+1452.779096617" observedRunningTime="2026-03-08 00:47:12.785650186 +0000 UTC m=+1454.259794520" watchObservedRunningTime="2026-03-08 00:47:12.788846984 +0000 UTC m=+1454.262991328"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.811254 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.87038012 podStartE2EDuration="7.811236456s" podCreationTimestamp="2026-03-08 00:47:05 +0000 UTC" firstStartedPulling="2026-03-08 00:47:07.391003067 +0000 UTC m=+1448.865147411" lastFinishedPulling="2026-03-08 00:47:11.331859403 +0000 UTC m=+1452.806003747" observedRunningTime="2026-03-08 00:47:12.803041327 +0000 UTC m=+1454.277185691" watchObservedRunningTime="2026-03-08 00:47:12.811236456 +0000 UTC m=+1454.285380790"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.832498 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" podStartSLOduration=7.8324798829999995 podStartE2EDuration="7.832479883s" podCreationTimestamp="2026-03-08 00:47:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:47:12.824074637 +0000 UTC m=+1454.298218981" watchObservedRunningTime="2026-03-08 00:47:12.832479883 +0000 UTC m=+1454.306624217"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.845057 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.941638812 podStartE2EDuration="7.845035836s" podCreationTimestamp="2026-03-08 00:47:05 +0000 UTC" firstStartedPulling="2026-03-08 00:47:07.404815518 +0000 UTC m=+1448.878959852" lastFinishedPulling="2026-03-08 00:47:11.308212532 +0000 UTC m=+1452.782356876" observedRunningTime="2026-03-08 00:47:12.841150338 +0000 UTC m=+1454.315294682" watchObservedRunningTime="2026-03-08 00:47:12.845035836 +0000 UTC m=+1454.319180180"
Mar 08 00:47:12 crc kubenswrapper[4762]: I0308 00:47:12.922960 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.093655 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-5xq4s"]
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.095117 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-5xq4s"
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.127374 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-5xq4s"]
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.168301 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8pd5\" (UniqueName: \"kubernetes.io/projected/a237e39c-dfeb-490d-a675-88175cb7f0fb-kube-api-access-n8pd5\") pod \"aodh-db-create-5xq4s\" (UID: \"a237e39c-dfeb-490d-a675-88175cb7f0fb\") " pod="openstack/aodh-db-create-5xq4s"
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.168357 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a237e39c-dfeb-490d-a675-88175cb7f0fb-operator-scripts\") pod \"aodh-db-create-5xq4s\" (UID: \"a237e39c-dfeb-490d-a675-88175cb7f0fb\") " pod="openstack/aodh-db-create-5xq4s"
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.187451 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-2ad8-account-create-update-b6dhz"]
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.188599 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2ad8-account-create-update-b6dhz"
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.192494 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret"
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.246466 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-2ad8-account-create-update-b6dhz"]
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.270109 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8pd5\" (UniqueName: \"kubernetes.io/projected/a237e39c-dfeb-490d-a675-88175cb7f0fb-kube-api-access-n8pd5\") pod \"aodh-db-create-5xq4s\" (UID: \"a237e39c-dfeb-490d-a675-88175cb7f0fb\") " pod="openstack/aodh-db-create-5xq4s"
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.270172 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsp4h\" (UniqueName: \"kubernetes.io/projected/06e208b1-f109-4635-89ac-399a6421162f-kube-api-access-nsp4h\") pod \"aodh-2ad8-account-create-update-b6dhz\" (UID: \"06e208b1-f109-4635-89ac-399a6421162f\") " pod="openstack/aodh-2ad8-account-create-update-b6dhz"
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.270223 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a237e39c-dfeb-490d-a675-88175cb7f0fb-operator-scripts\") pod \"aodh-db-create-5xq4s\" (UID: \"a237e39c-dfeb-490d-a675-88175cb7f0fb\") " pod="openstack/aodh-db-create-5xq4s"
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.270367 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e208b1-f109-4635-89ac-399a6421162f-operator-scripts\") pod \"aodh-2ad8-account-create-update-b6dhz\" (UID: \"06e208b1-f109-4635-89ac-399a6421162f\") " pod="openstack/aodh-2ad8-account-create-update-b6dhz"
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.271379 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a237e39c-dfeb-490d-a675-88175cb7f0fb-operator-scripts\") pod \"aodh-db-create-5xq4s\" (UID: \"a237e39c-dfeb-490d-a675-88175cb7f0fb\") " pod="openstack/aodh-db-create-5xq4s"
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.308421 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8pd5\" (UniqueName: \"kubernetes.io/projected/a237e39c-dfeb-490d-a675-88175cb7f0fb-kube-api-access-n8pd5\") pod \"aodh-db-create-5xq4s\" (UID: \"a237e39c-dfeb-490d-a675-88175cb7f0fb\") " pod="openstack/aodh-db-create-5xq4s"
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.327305 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f67b828b-be4b-40d5-b1ac-e855fb50cb25" path="/var/lib/kubelet/pods/f67b828b-be4b-40d5-b1ac-e855fb50cb25/volumes"
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.372188 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e208b1-f109-4635-89ac-399a6421162f-operator-scripts\") pod \"aodh-2ad8-account-create-update-b6dhz\" (UID: \"06e208b1-f109-4635-89ac-399a6421162f\") " pod="openstack/aodh-2ad8-account-create-update-b6dhz"
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.372362 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsp4h\" (UniqueName: \"kubernetes.io/projected/06e208b1-f109-4635-89ac-399a6421162f-kube-api-access-nsp4h\") pod \"aodh-2ad8-account-create-update-b6dhz\" (UID: \"06e208b1-f109-4635-89ac-399a6421162f\") " pod="openstack/aodh-2ad8-account-create-update-b6dhz"
Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.373614 4762 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e208b1-f109-4635-89ac-399a6421162f-operator-scripts\") pod \"aodh-2ad8-account-create-update-b6dhz\" (UID: \"06e208b1-f109-4635-89ac-399a6421162f\") " pod="openstack/aodh-2ad8-account-create-update-b6dhz" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.401147 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsp4h\" (UniqueName: \"kubernetes.io/projected/06e208b1-f109-4635-89ac-399a6421162f-kube-api-access-nsp4h\") pod \"aodh-2ad8-account-create-update-b6dhz\" (UID: \"06e208b1-f109-4635-89ac-399a6421162f\") " pod="openstack/aodh-2ad8-account-create-update-b6dhz" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.422460 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-5xq4s" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.513299 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2ad8-account-create-update-b6dhz" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.525065 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.581001 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2spx\" (UniqueName: \"kubernetes.io/projected/887fc9b1-72ee-4f80-abe4-bec824e54090-kube-api-access-h2spx\") pod \"887fc9b1-72ee-4f80-abe4-bec824e54090\" (UID: \"887fc9b1-72ee-4f80-abe4-bec824e54090\") " Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.581044 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887fc9b1-72ee-4f80-abe4-bec824e54090-config-data\") pod \"887fc9b1-72ee-4f80-abe4-bec824e54090\" (UID: \"887fc9b1-72ee-4f80-abe4-bec824e54090\") " Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.581063 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/887fc9b1-72ee-4f80-abe4-bec824e54090-logs\") pod \"887fc9b1-72ee-4f80-abe4-bec824e54090\" (UID: \"887fc9b1-72ee-4f80-abe4-bec824e54090\") " Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.581138 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887fc9b1-72ee-4f80-abe4-bec824e54090-combined-ca-bundle\") pod \"887fc9b1-72ee-4f80-abe4-bec824e54090\" (UID: \"887fc9b1-72ee-4f80-abe4-bec824e54090\") " Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.583401 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/887fc9b1-72ee-4f80-abe4-bec824e54090-logs" (OuterVolumeSpecName: "logs") pod "887fc9b1-72ee-4f80-abe4-bec824e54090" (UID: "887fc9b1-72ee-4f80-abe4-bec824e54090"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.586366 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/887fc9b1-72ee-4f80-abe4-bec824e54090-kube-api-access-h2spx" (OuterVolumeSpecName: "kube-api-access-h2spx") pod "887fc9b1-72ee-4f80-abe4-bec824e54090" (UID: "887fc9b1-72ee-4f80-abe4-bec824e54090"). InnerVolumeSpecName "kube-api-access-h2spx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.614732 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887fc9b1-72ee-4f80-abe4-bec824e54090-config-data" (OuterVolumeSpecName: "config-data") pod "887fc9b1-72ee-4f80-abe4-bec824e54090" (UID: "887fc9b1-72ee-4f80-abe4-bec824e54090"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.622752 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/887fc9b1-72ee-4f80-abe4-bec824e54090-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "887fc9b1-72ee-4f80-abe4-bec824e54090" (UID: "887fc9b1-72ee-4f80-abe4-bec824e54090"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.684499 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2spx\" (UniqueName: \"kubernetes.io/projected/887fc9b1-72ee-4f80-abe4-bec824e54090-kube-api-access-h2spx\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.684525 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/887fc9b1-72ee-4f80-abe4-bec824e54090-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.684534 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/887fc9b1-72ee-4f80-abe4-bec824e54090-logs\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.684543 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/887fc9b1-72ee-4f80-abe4-bec824e54090-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.797277 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7","Type":"ContainerStarted","Data":"973fbd398fb56eb76a78c13f247426ba7cba02ba71382588679df59c2bfda3ea"} Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.797318 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7","Type":"ContainerStarted","Data":"78bb2ffe38f0c448e0b083f4fc4242fc92e16b6efcb16a5c5607329bb46efc59"} Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.802204 4762 generic.go:334] "Generic (PLEG): container finished" podID="887fc9b1-72ee-4f80-abe4-bec824e54090" containerID="04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e" exitCode=0 Mar 08 00:47:13 crc 
kubenswrapper[4762]: I0308 00:47:13.802238 4762 generic.go:334] "Generic (PLEG): container finished" podID="887fc9b1-72ee-4f80-abe4-bec824e54090" containerID="853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec" exitCode=143 Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.803288 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.805823 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"887fc9b1-72ee-4f80-abe4-bec824e54090","Type":"ContainerDied","Data":"04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e"} Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.805866 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"887fc9b1-72ee-4f80-abe4-bec824e54090","Type":"ContainerDied","Data":"853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec"} Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.805877 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"887fc9b1-72ee-4f80-abe4-bec824e54090","Type":"ContainerDied","Data":"130f7b7f2adeaceaac116274920721cf866d4791aafd418ee29539a1170d5e27"} Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.805892 4762 scope.go:117] "RemoveContainer" containerID="04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.833912 4762 scope.go:117] "RemoveContainer" containerID="853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.876871 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.887896 4762 scope.go:117] "RemoveContainer" containerID="04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e" 
Mar 08 00:47:13 crc kubenswrapper[4762]: E0308 00:47:13.891853 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e\": container with ID starting with 04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e not found: ID does not exist" containerID="04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.891888 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e"} err="failed to get container status \"04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e\": rpc error: code = NotFound desc = could not find container \"04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e\": container with ID starting with 04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e not found: ID does not exist" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.891910 4762 scope.go:117] "RemoveContainer" containerID="853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.892724 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:47:13 crc kubenswrapper[4762]: E0308 00:47:13.893627 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec\": container with ID starting with 853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec not found: ID does not exist" containerID="853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.893724 4762 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec"} err="failed to get container status \"853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec\": rpc error: code = NotFound desc = could not find container \"853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec\": container with ID starting with 853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec not found: ID does not exist" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.893834 4762 scope.go:117] "RemoveContainer" containerID="04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.897139 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e"} err="failed to get container status \"04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e\": rpc error: code = NotFound desc = could not find container \"04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e\": container with ID starting with 04ecac830d8b04f80f0d389d10b62d192e0bd6e825875d3aa6f747ecb8c6006e not found: ID does not exist" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.897295 4762 scope.go:117] "RemoveContainer" containerID="853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.898231 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec"} err="failed to get container status \"853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec\": rpc error: code = NotFound desc = could not find container \"853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec\": container with ID starting with 853d46abf8b805ee14a7c65f316ab926686c3f94758fd107e52777d2ed0d28ec not found: ID does not 
exist" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.903423 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:47:13 crc kubenswrapper[4762]: E0308 00:47:13.903922 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887fc9b1-72ee-4f80-abe4-bec824e54090" containerName="nova-metadata-log" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.903937 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="887fc9b1-72ee-4f80-abe4-bec824e54090" containerName="nova-metadata-log" Mar 08 00:47:13 crc kubenswrapper[4762]: E0308 00:47:13.903958 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887fc9b1-72ee-4f80-abe4-bec824e54090" containerName="nova-metadata-metadata" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.903964 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="887fc9b1-72ee-4f80-abe4-bec824e54090" containerName="nova-metadata-metadata" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.904165 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="887fc9b1-72ee-4f80-abe4-bec824e54090" containerName="nova-metadata-log" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.904200 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="887fc9b1-72ee-4f80-abe4-bec824e54090" containerName="nova-metadata-metadata" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.905292 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.909127 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.909354 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.919513 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:47:13 crc kubenswrapper[4762]: I0308 00:47:13.942131 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-5xq4s"] Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.109889 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-logs\") pod \"nova-metadata-0\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " pod="openstack/nova-metadata-0" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.110296 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gmp5\" (UniqueName: \"kubernetes.io/projected/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-kube-api-access-2gmp5\") pod \"nova-metadata-0\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " pod="openstack/nova-metadata-0" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.110365 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " pod="openstack/nova-metadata-0" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.110400 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-config-data\") pod \"nova-metadata-0\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " pod="openstack/nova-metadata-0" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.110428 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " pod="openstack/nova-metadata-0" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.150656 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-2ad8-account-create-update-b6dhz"] Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.212471 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gmp5\" (UniqueName: \"kubernetes.io/projected/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-kube-api-access-2gmp5\") pod \"nova-metadata-0\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " pod="openstack/nova-metadata-0" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.212580 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " pod="openstack/nova-metadata-0" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.212627 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-config-data\") pod \"nova-metadata-0\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " pod="openstack/nova-metadata-0" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 
00:47:14.212662 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " pod="openstack/nova-metadata-0" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.212697 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-logs\") pod \"nova-metadata-0\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " pod="openstack/nova-metadata-0" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.213235 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-logs\") pod \"nova-metadata-0\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " pod="openstack/nova-metadata-0" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.218171 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-config-data\") pod \"nova-metadata-0\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " pod="openstack/nova-metadata-0" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.220740 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " pod="openstack/nova-metadata-0" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.223124 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " pod="openstack/nova-metadata-0" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.238617 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gmp5\" (UniqueName: \"kubernetes.io/projected/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-kube-api-access-2gmp5\") pod \"nova-metadata-0\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " pod="openstack/nova-metadata-0" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.245000 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.854184 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7","Type":"ContainerStarted","Data":"a26bdf0a7ea35fb84ed504a559d33acfd3a4d73ae04389e6e1c581648ae1d2b8"} Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.857357 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2ad8-account-create-update-b6dhz" event={"ID":"06e208b1-f109-4635-89ac-399a6421162f","Type":"ContainerStarted","Data":"2274c0ffd9e65b70c2f0092cfc22b965a1f34cd4637318158679b4a815311c81"} Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.859418 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-5xq4s" event={"ID":"a237e39c-dfeb-490d-a675-88175cb7f0fb","Type":"ContainerStarted","Data":"d5bf44ad97bd61215d9f26ce45315f577fd75b93b1a9b9a3bb113b104a998eab"} Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.859441 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-5xq4s" event={"ID":"a237e39c-dfeb-490d-a675-88175cb7f0fb","Type":"ContainerStarted","Data":"2ab0bbe236288b1693307b7daf8bcbf6fed8f41e75489106d15b5be7498acf4f"} Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.885720 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/aodh-db-create-5xq4s" podStartSLOduration=1.8857024949999999 podStartE2EDuration="1.885702495s" podCreationTimestamp="2026-03-08 00:47:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:47:14.877447773 +0000 UTC m=+1456.351592117" watchObservedRunningTime="2026-03-08 00:47:14.885702495 +0000 UTC m=+1456.359846829" Mar 08 00:47:14 crc kubenswrapper[4762]: I0308 00:47:14.935704 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:47:15 crc kubenswrapper[4762]: I0308 00:47:15.274848 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="887fc9b1-72ee-4f80-abe4-bec824e54090" path="/var/lib/kubelet/pods/887fc9b1-72ee-4f80-abe4-bec824e54090/volumes" Mar 08 00:47:15 crc kubenswrapper[4762]: I0308 00:47:15.852883 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 08 00:47:15 crc kubenswrapper[4762]: I0308 00:47:15.853085 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 08 00:47:15 crc kubenswrapper[4762]: I0308 00:47:15.878750 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7","Type":"ContainerStarted","Data":"c3cafd82f77a0db7e0a3fe9fc65b9ef96d9d186b354d145198ab88a96bbe1ec7"} Mar 08 00:47:15 crc kubenswrapper[4762]: I0308 00:47:15.884815 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a895fdac-bfcc-496c-9f8b-8e1d2abc733e","Type":"ContainerStarted","Data":"c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a"} Mar 08 00:47:15 crc kubenswrapper[4762]: I0308 00:47:15.884855 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"a895fdac-bfcc-496c-9f8b-8e1d2abc733e","Type":"ContainerStarted","Data":"d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509"} Mar 08 00:47:15 crc kubenswrapper[4762]: I0308 00:47:15.884864 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a895fdac-bfcc-496c-9f8b-8e1d2abc733e","Type":"ContainerStarted","Data":"d0d1d6a79ae777300a700df0dcab9b6529a896d88d22cbdfb2a5da300fd6146b"} Mar 08 00:47:15 crc kubenswrapper[4762]: I0308 00:47:15.887594 4762 generic.go:334] "Generic (PLEG): container finished" podID="7f3ba52d-9795-4455-bc4a-2469ed8b73df" containerID="70b7ff0ea294294e5fa962593375dd3d535116656c1b45e5a77dd70b2ae9f521" exitCode=0 Mar 08 00:47:15 crc kubenswrapper[4762]: I0308 00:47:15.887635 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6crtj" event={"ID":"7f3ba52d-9795-4455-bc4a-2469ed8b73df","Type":"ContainerDied","Data":"70b7ff0ea294294e5fa962593375dd3d535116656c1b45e5a77dd70b2ae9f521"} Mar 08 00:47:15 crc kubenswrapper[4762]: I0308 00:47:15.900547 4762 generic.go:334] "Generic (PLEG): container finished" podID="06e208b1-f109-4635-89ac-399a6421162f" containerID="eb5ae3d7ced2c09c8761cfb2a3024d0c4c65f6282876cf18973527c999b70a4a" exitCode=0 Mar 08 00:47:15 crc kubenswrapper[4762]: I0308 00:47:15.900643 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2ad8-account-create-update-b6dhz" event={"ID":"06e208b1-f109-4635-89ac-399a6421162f","Type":"ContainerDied","Data":"eb5ae3d7ced2c09c8761cfb2a3024d0c4c65f6282876cf18973527c999b70a4a"} Mar 08 00:47:15 crc kubenswrapper[4762]: I0308 00:47:15.904359 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.904347207 podStartE2EDuration="2.904347207s" podCreationTimestamp="2026-03-08 00:47:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-08 00:47:15.901948424 +0000 UTC m=+1457.376092768" watchObservedRunningTime="2026-03-08 00:47:15.904347207 +0000 UTC m=+1457.378491551" Mar 08 00:47:15 crc kubenswrapper[4762]: I0308 00:47:15.911917 4762 generic.go:334] "Generic (PLEG): container finished" podID="a237e39c-dfeb-490d-a675-88175cb7f0fb" containerID="d5bf44ad97bd61215d9f26ce45315f577fd75b93b1a9b9a3bb113b104a998eab" exitCode=0 Mar 08 00:47:15 crc kubenswrapper[4762]: I0308 00:47:15.912945 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-5xq4s" event={"ID":"a237e39c-dfeb-490d-a675-88175cb7f0fb","Type":"ContainerDied","Data":"d5bf44ad97bd61215d9f26ce45315f577fd75b93b1a9b9a3bb113b104a998eab"} Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.148038 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.148095 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.174488 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.181905 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.260956 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.339099 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-9ldwb"] Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.339407 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" podUID="9f0b97b7-6c28-4a8b-99d7-242dde839d36" 
containerName="dnsmasq-dns" containerID="cri-o://0b8e68c838631dbf9e9e00872fdb789ac25f3d12eab8270e6360df78aba5836f" gracePeriod=10 Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.920219 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.925593 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7","Type":"ContainerStarted","Data":"91905d79615d3e9a43d074de4ee94acfa48c2b92369065bbfd487a625ceedb36"} Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.925718 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.927500 4762 generic.go:334] "Generic (PLEG): container finished" podID="9f0b97b7-6c28-4a8b-99d7-242dde839d36" containerID="0b8e68c838631dbf9e9e00872fdb789ac25f3d12eab8270e6360df78aba5836f" exitCode=0 Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.927576 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" event={"ID":"9f0b97b7-6c28-4a8b-99d7-242dde839d36","Type":"ContainerDied","Data":"0b8e68c838631dbf9e9e00872fdb789ac25f3d12eab8270e6360df78aba5836f"} Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.927609 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" event={"ID":"9f0b97b7-6c28-4a8b-99d7-242dde839d36","Type":"ContainerDied","Data":"1b72ffc09e07f22fe0fa48cdce8fa494f89d130bd86cec83b9160981711581fc"} Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.927627 4762 scope.go:117] "RemoveContainer" containerID="0b8e68c838631dbf9e9e00872fdb789ac25f3d12eab8270e6360df78aba5836f" Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.927844 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-9ldwb" Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.936229 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="77f66769-f7d1-4efd-b2ea-7cdcebf49500" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.230:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.936388 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="77f66769-f7d1-4efd-b2ea-7cdcebf49500" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.230:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.970847 4762 scope.go:117] "RemoveContainer" containerID="4e8c57df4a57e3c3cab7d7976b325a59d275aa08309748ab23d871b41b38ede2" Mar 08 00:47:16 crc kubenswrapper[4762]: I0308 00:47:16.990444 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.035739 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.472498568 podStartE2EDuration="5.035716885s" podCreationTimestamp="2026-03-08 00:47:12 +0000 UTC" firstStartedPulling="2026-03-08 00:47:12.935324788 +0000 UTC m=+1454.409469132" lastFinishedPulling="2026-03-08 00:47:16.498543105 +0000 UTC m=+1457.972687449" observedRunningTime="2026-03-08 00:47:17.003993638 +0000 UTC m=+1458.478137982" watchObservedRunningTime="2026-03-08 00:47:17.035716885 +0000 UTC m=+1458.509861229" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.037605 4762 scope.go:117] "RemoveContainer" containerID="0b8e68c838631dbf9e9e00872fdb789ac25f3d12eab8270e6360df78aba5836f" Mar 08 00:47:17 crc kubenswrapper[4762]: E0308 00:47:17.041774 4762 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b8e68c838631dbf9e9e00872fdb789ac25f3d12eab8270e6360df78aba5836f\": container with ID starting with 0b8e68c838631dbf9e9e00872fdb789ac25f3d12eab8270e6360df78aba5836f not found: ID does not exist" containerID="0b8e68c838631dbf9e9e00872fdb789ac25f3d12eab8270e6360df78aba5836f" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.041811 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b8e68c838631dbf9e9e00872fdb789ac25f3d12eab8270e6360df78aba5836f"} err="failed to get container status \"0b8e68c838631dbf9e9e00872fdb789ac25f3d12eab8270e6360df78aba5836f\": rpc error: code = NotFound desc = could not find container \"0b8e68c838631dbf9e9e00872fdb789ac25f3d12eab8270e6360df78aba5836f\": container with ID starting with 0b8e68c838631dbf9e9e00872fdb789ac25f3d12eab8270e6360df78aba5836f not found: ID does not exist" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.041832 4762 scope.go:117] "RemoveContainer" containerID="4e8c57df4a57e3c3cab7d7976b325a59d275aa08309748ab23d871b41b38ede2" Mar 08 00:47:17 crc kubenswrapper[4762]: E0308 00:47:17.049875 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e8c57df4a57e3c3cab7d7976b325a59d275aa08309748ab23d871b41b38ede2\": container with ID starting with 4e8c57df4a57e3c3cab7d7976b325a59d275aa08309748ab23d871b41b38ede2 not found: ID does not exist" containerID="4e8c57df4a57e3c3cab7d7976b325a59d275aa08309748ab23d871b41b38ede2" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.049915 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e8c57df4a57e3c3cab7d7976b325a59d275aa08309748ab23d871b41b38ede2"} err="failed to get container status \"4e8c57df4a57e3c3cab7d7976b325a59d275aa08309748ab23d871b41b38ede2\": rpc error: code = NotFound 
desc = could not find container \"4e8c57df4a57e3c3cab7d7976b325a59d275aa08309748ab23d871b41b38ede2\": container with ID starting with 4e8c57df4a57e3c3cab7d7976b325a59d275aa08309748ab23d871b41b38ede2 not found: ID does not exist" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.090721 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2rg2\" (UniqueName: \"kubernetes.io/projected/9f0b97b7-6c28-4a8b-99d7-242dde839d36-kube-api-access-d2rg2\") pod \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.090789 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-config\") pod \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.090904 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-dns-swift-storage-0\") pod \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.091025 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-dns-svc\") pod \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.091074 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-ovsdbserver-nb\") pod \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " Mar 08 
00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.091114 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-ovsdbserver-sb\") pod \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\" (UID: \"9f0b97b7-6c28-4a8b-99d7-242dde839d36\") " Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.122040 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f0b97b7-6c28-4a8b-99d7-242dde839d36-kube-api-access-d2rg2" (OuterVolumeSpecName: "kube-api-access-d2rg2") pod "9f0b97b7-6c28-4a8b-99d7-242dde839d36" (UID: "9f0b97b7-6c28-4a8b-99d7-242dde839d36"). InnerVolumeSpecName "kube-api-access-d2rg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.161605 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-config" (OuterVolumeSpecName: "config") pod "9f0b97b7-6c28-4a8b-99d7-242dde839d36" (UID: "9f0b97b7-6c28-4a8b-99d7-242dde839d36"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.169251 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9f0b97b7-6c28-4a8b-99d7-242dde839d36" (UID: "9f0b97b7-6c28-4a8b-99d7-242dde839d36"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.193056 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.193085 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2rg2\" (UniqueName: \"kubernetes.io/projected/9f0b97b7-6c28-4a8b-99d7-242dde839d36-kube-api-access-d2rg2\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.193100 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.203183 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9f0b97b7-6c28-4a8b-99d7-242dde839d36" (UID: "9f0b97b7-6c28-4a8b-99d7-242dde839d36"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.205740 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9f0b97b7-6c28-4a8b-99d7-242dde839d36" (UID: "9f0b97b7-6c28-4a8b-99d7-242dde839d36"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.235785 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f0b97b7-6c28-4a8b-99d7-242dde839d36" (UID: "9f0b97b7-6c28-4a8b-99d7-242dde839d36"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.296449 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.296481 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.296493 4762 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9f0b97b7-6c28-4a8b-99d7-242dde839d36-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.554836 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-9ldwb"] Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.565068 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-9ldwb"] Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.761257 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-2ad8-account-create-update-b6dhz" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.765035 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6crtj" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.780614 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-5xq4s" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.915010 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e208b1-f109-4635-89ac-399a6421162f-operator-scripts\") pod \"06e208b1-f109-4635-89ac-399a6421162f\" (UID: \"06e208b1-f109-4635-89ac-399a6421162f\") " Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.915056 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-config-data\") pod \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\" (UID: \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\") " Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.915086 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a237e39c-dfeb-490d-a675-88175cb7f0fb-operator-scripts\") pod \"a237e39c-dfeb-490d-a675-88175cb7f0fb\" (UID: \"a237e39c-dfeb-490d-a675-88175cb7f0fb\") " Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.915134 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6z24\" (UniqueName: \"kubernetes.io/projected/7f3ba52d-9795-4455-bc4a-2469ed8b73df-kube-api-access-f6z24\") pod \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\" (UID: \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\") " Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.915190 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-scripts\") pod \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\" (UID: 
\"7f3ba52d-9795-4455-bc4a-2469ed8b73df\") " Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.915276 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-combined-ca-bundle\") pod \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\" (UID: \"7f3ba52d-9795-4455-bc4a-2469ed8b73df\") " Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.915415 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsp4h\" (UniqueName: \"kubernetes.io/projected/06e208b1-f109-4635-89ac-399a6421162f-kube-api-access-nsp4h\") pod \"06e208b1-f109-4635-89ac-399a6421162f\" (UID: \"06e208b1-f109-4635-89ac-399a6421162f\") " Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.915496 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8pd5\" (UniqueName: \"kubernetes.io/projected/a237e39c-dfeb-490d-a675-88175cb7f0fb-kube-api-access-n8pd5\") pod \"a237e39c-dfeb-490d-a675-88175cb7f0fb\" (UID: \"a237e39c-dfeb-490d-a675-88175cb7f0fb\") " Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.929120 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a237e39c-dfeb-490d-a675-88175cb7f0fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a237e39c-dfeb-490d-a675-88175cb7f0fb" (UID: "a237e39c-dfeb-490d-a675-88175cb7f0fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.930825 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06e208b1-f109-4635-89ac-399a6421162f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06e208b1-f109-4635-89ac-399a6421162f" (UID: "06e208b1-f109-4635-89ac-399a6421162f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.931019 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a237e39c-dfeb-490d-a675-88175cb7f0fb-kube-api-access-n8pd5" (OuterVolumeSpecName: "kube-api-access-n8pd5") pod "a237e39c-dfeb-490d-a675-88175cb7f0fb" (UID: "a237e39c-dfeb-490d-a675-88175cb7f0fb"). InnerVolumeSpecName "kube-api-access-n8pd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.931191 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e208b1-f109-4635-89ac-399a6421162f-kube-api-access-nsp4h" (OuterVolumeSpecName: "kube-api-access-nsp4h") pod "06e208b1-f109-4635-89ac-399a6421162f" (UID: "06e208b1-f109-4635-89ac-399a6421162f"). InnerVolumeSpecName "kube-api-access-nsp4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.948887 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f3ba52d-9795-4455-bc4a-2469ed8b73df-kube-api-access-f6z24" (OuterVolumeSpecName: "kube-api-access-f6z24") pod "7f3ba52d-9795-4455-bc4a-2469ed8b73df" (UID: "7f3ba52d-9795-4455-bc4a-2469ed8b73df"). InnerVolumeSpecName "kube-api-access-f6z24". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.953899 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-scripts" (OuterVolumeSpecName: "scripts") pod "7f3ba52d-9795-4455-bc4a-2469ed8b73df" (UID: "7f3ba52d-9795-4455-bc4a-2469ed8b73df"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.971587 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-6crtj" event={"ID":"7f3ba52d-9795-4455-bc4a-2469ed8b73df","Type":"ContainerDied","Data":"9a9c889a72fb08f21c364b1d8391c1ca5f8a08730555a8342a728d5ff5dcdcab"} Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.971621 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a9c889a72fb08f21c364b1d8391c1ca5f8a08730555a8342a728d5ff5dcdcab" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.971696 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-6crtj" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.973449 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-5xq4s" event={"ID":"a237e39c-dfeb-490d-a675-88175cb7f0fb","Type":"ContainerDied","Data":"2ab0bbe236288b1693307b7daf8bcbf6fed8f41e75489106d15b5be7498acf4f"} Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.973464 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ab0bbe236288b1693307b7daf8bcbf6fed8f41e75489106d15b5be7498acf4f" Mar 08 00:47:17 crc kubenswrapper[4762]: I0308 00:47:17.973519 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-5xq4s" Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.003930 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-config-data" (OuterVolumeSpecName: "config-data") pod "7f3ba52d-9795-4455-bc4a-2469ed8b73df" (UID: "7f3ba52d-9795-4455-bc4a-2469ed8b73df"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.011000 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f3ba52d-9795-4455-bc4a-2469ed8b73df" (UID: "7f3ba52d-9795-4455-bc4a-2469ed8b73df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.011514 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-2ad8-account-create-update-b6dhz" event={"ID":"06e208b1-f109-4635-89ac-399a6421162f","Type":"ContainerDied","Data":"2274c0ffd9e65b70c2f0092cfc22b965a1f34cd4637318158679b4a815311c81"} Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.011564 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2274c0ffd9e65b70c2f0092cfc22b965a1f34cd4637318158679b4a815311c81" Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.011633 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-2ad8-account-create-update-b6dhz" Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.017586 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8pd5\" (UniqueName: \"kubernetes.io/projected/a237e39c-dfeb-490d-a675-88175cb7f0fb-kube-api-access-n8pd5\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.017628 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06e208b1-f109-4635-89ac-399a6421162f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.017641 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.017654 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a237e39c-dfeb-490d-a675-88175cb7f0fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.017665 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6z24\" (UniqueName: \"kubernetes.io/projected/7f3ba52d-9795-4455-bc4a-2469ed8b73df-kube-api-access-f6z24\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.017675 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.017685 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f3ba52d-9795-4455-bc4a-2469ed8b73df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:18 crc 
kubenswrapper[4762]: I0308 00:47:18.017696 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsp4h\" (UniqueName: \"kubernetes.io/projected/06e208b1-f109-4635-89ac-399a6421162f-kube-api-access-nsp4h\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.131657 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.131987 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="77f66769-f7d1-4efd-b2ea-7cdcebf49500" containerName="nova-api-log" containerID="cri-o://c90a44801bdedf1a499f84f5d11b0363c66a3fe33c07279e7f3cea377a607839" gracePeriod=30 Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.132779 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="77f66769-f7d1-4efd-b2ea-7cdcebf49500" containerName="nova-api-api" containerID="cri-o://55911e487d11140e85b9962185a42688c70f5e60f948641e3c35216c0a29145f" gracePeriod=30 Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.149436 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.149664 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a895fdac-bfcc-496c-9f8b-8e1d2abc733e" containerName="nova-metadata-log" containerID="cri-o://d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509" gracePeriod=30 Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.149729 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a895fdac-bfcc-496c-9f8b-8e1d2abc733e" containerName="nova-metadata-metadata" containerID="cri-o://c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a" gracePeriod=30 Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 
00:47:18.338351 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.828543 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.937419 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-config-data\") pod \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.937535 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-logs\") pod \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.937595 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gmp5\" (UniqueName: \"kubernetes.io/projected/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-kube-api-access-2gmp5\") pod \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.937747 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-combined-ca-bundle\") pod \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.937928 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-nova-metadata-tls-certs\") pod 
\"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\" (UID: \"a895fdac-bfcc-496c-9f8b-8e1d2abc733e\") " Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.938334 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-logs" (OuterVolumeSpecName: "logs") pod "a895fdac-bfcc-496c-9f8b-8e1d2abc733e" (UID: "a895fdac-bfcc-496c-9f8b-8e1d2abc733e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.943930 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-kube-api-access-2gmp5" (OuterVolumeSpecName: "kube-api-access-2gmp5") pod "a895fdac-bfcc-496c-9f8b-8e1d2abc733e" (UID: "a895fdac-bfcc-496c-9f8b-8e1d2abc733e"). InnerVolumeSpecName "kube-api-access-2gmp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.981152 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-config-data" (OuterVolumeSpecName: "config-data") pod "a895fdac-bfcc-496c-9f8b-8e1d2abc733e" (UID: "a895fdac-bfcc-496c-9f8b-8e1d2abc733e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:18 crc kubenswrapper[4762]: I0308 00:47:18.989839 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a895fdac-bfcc-496c-9f8b-8e1d2abc733e" (UID: "a895fdac-bfcc-496c-9f8b-8e1d2abc733e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.024207 4762 generic.go:334] "Generic (PLEG): container finished" podID="a895fdac-bfcc-496c-9f8b-8e1d2abc733e" containerID="c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a" exitCode=0 Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.024237 4762 generic.go:334] "Generic (PLEG): container finished" podID="a895fdac-bfcc-496c-9f8b-8e1d2abc733e" containerID="d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509" exitCode=143 Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.024271 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a895fdac-bfcc-496c-9f8b-8e1d2abc733e","Type":"ContainerDied","Data":"c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a"} Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.024295 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a895fdac-bfcc-496c-9f8b-8e1d2abc733e","Type":"ContainerDied","Data":"d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509"} Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.024305 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a895fdac-bfcc-496c-9f8b-8e1d2abc733e","Type":"ContainerDied","Data":"d0d1d6a79ae777300a700df0dcab9b6529a896d88d22cbdfb2a5da300fd6146b"} Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.024319 4762 scope.go:117] "RemoveContainer" containerID="c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.024434 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.031279 4762 generic.go:334] "Generic (PLEG): container finished" podID="77f66769-f7d1-4efd-b2ea-7cdcebf49500" containerID="c90a44801bdedf1a499f84f5d11b0363c66a3fe33c07279e7f3cea377a607839" exitCode=143 Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.031359 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"77f66769-f7d1-4efd-b2ea-7cdcebf49500","Type":"ContainerDied","Data":"c90a44801bdedf1a499f84f5d11b0363c66a3fe33c07279e7f3cea377a607839"} Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.031516 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a0de75d3-2671-4d5d-9bd2-78212160505e" containerName="nova-scheduler-scheduler" containerID="cri-o://eeddcdda7926b57d8eb0a904f85ee9e4f4606f66e8673d7706a3ce4561e77b76" gracePeriod=30 Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.037926 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a895fdac-bfcc-496c-9f8b-8e1d2abc733e" (UID: "a895fdac-bfcc-496c-9f8b-8e1d2abc733e"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.040050 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-logs\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.040076 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gmp5\" (UniqueName: \"kubernetes.io/projected/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-kube-api-access-2gmp5\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.040089 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.040098 4762 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.040107 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a895fdac-bfcc-496c-9f8b-8e1d2abc733e-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.063865 4762 scope.go:117] "RemoveContainer" containerID="d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.089285 4762 scope.go:117] "RemoveContainer" containerID="c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a" Mar 08 00:47:19 crc kubenswrapper[4762]: E0308 00:47:19.089678 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a\": container with ID starting with c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a not found: ID does not exist" containerID="c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.089734 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a"} err="failed to get container status \"c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a\": rpc error: code = NotFound desc = could not find container \"c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a\": container with ID starting with c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a not found: ID does not exist" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.089774 4762 scope.go:117] "RemoveContainer" containerID="d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509" Mar 08 00:47:19 crc kubenswrapper[4762]: E0308 00:47:19.090021 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509\": container with ID starting with d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509 not found: ID does not exist" containerID="d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.090048 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509"} err="failed to get container status \"d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509\": rpc error: code = NotFound desc = could not find container \"d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509\": container with ID 
starting with d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509 not found: ID does not exist" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.090063 4762 scope.go:117] "RemoveContainer" containerID="c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.090345 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a"} err="failed to get container status \"c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a\": rpc error: code = NotFound desc = could not find container \"c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a\": container with ID starting with c9bd818147a846dcf9c238a91698ac862d47b667000305d8a7ae5e8a5fe09c7a not found: ID does not exist" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.090364 4762 scope.go:117] "RemoveContainer" containerID="d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.090621 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509"} err="failed to get container status \"d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509\": rpc error: code = NotFound desc = could not find container \"d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509\": container with ID starting with d355c4dc88f9aa604fe648f4827a8306653d8f7a17ba85d40ce3425a4bc54509 not found: ID does not exist" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.275210 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f0b97b7-6c28-4a8b-99d7-242dde839d36" path="/var/lib/kubelet/pods/9f0b97b7-6c28-4a8b-99d7-242dde839d36/volumes" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.358966 4762 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.373981 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.394203 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:47:19 crc kubenswrapper[4762]: E0308 00:47:19.394688 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f0b97b7-6c28-4a8b-99d7-242dde839d36" containerName="init" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.394706 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f0b97b7-6c28-4a8b-99d7-242dde839d36" containerName="init" Mar 08 00:47:19 crc kubenswrapper[4762]: E0308 00:47:19.394734 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a895fdac-bfcc-496c-9f8b-8e1d2abc733e" containerName="nova-metadata-metadata" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.394741 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="a895fdac-bfcc-496c-9f8b-8e1d2abc733e" containerName="nova-metadata-metadata" Mar 08 00:47:19 crc kubenswrapper[4762]: E0308 00:47:19.394753 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e208b1-f109-4635-89ac-399a6421162f" containerName="mariadb-account-create-update" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.394772 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e208b1-f109-4635-89ac-399a6421162f" containerName="mariadb-account-create-update" Mar 08 00:47:19 crc kubenswrapper[4762]: E0308 00:47:19.394778 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a895fdac-bfcc-496c-9f8b-8e1d2abc733e" containerName="nova-metadata-log" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.394783 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="a895fdac-bfcc-496c-9f8b-8e1d2abc733e" containerName="nova-metadata-log" Mar 08 00:47:19 crc 
kubenswrapper[4762]: E0308 00:47:19.394798 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a237e39c-dfeb-490d-a675-88175cb7f0fb" containerName="mariadb-database-create" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.394803 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="a237e39c-dfeb-490d-a675-88175cb7f0fb" containerName="mariadb-database-create" Mar 08 00:47:19 crc kubenswrapper[4762]: E0308 00:47:19.394815 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f0b97b7-6c28-4a8b-99d7-242dde839d36" containerName="dnsmasq-dns" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.394820 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f0b97b7-6c28-4a8b-99d7-242dde839d36" containerName="dnsmasq-dns" Mar 08 00:47:19 crc kubenswrapper[4762]: E0308 00:47:19.394840 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f3ba52d-9795-4455-bc4a-2469ed8b73df" containerName="nova-manage" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.394846 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f3ba52d-9795-4455-bc4a-2469ed8b73df" containerName="nova-manage" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.395028 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f3ba52d-9795-4455-bc4a-2469ed8b73df" containerName="nova-manage" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.395053 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="a237e39c-dfeb-490d-a675-88175cb7f0fb" containerName="mariadb-database-create" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.395063 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f0b97b7-6c28-4a8b-99d7-242dde839d36" containerName="dnsmasq-dns" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.395073 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="a895fdac-bfcc-496c-9f8b-8e1d2abc733e" containerName="nova-metadata-metadata" Mar 08 
00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.395082 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="a895fdac-bfcc-496c-9f8b-8e1d2abc733e" containerName="nova-metadata-log" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.395096 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e208b1-f109-4635-89ac-399a6421162f" containerName="mariadb-account-create-update" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.396135 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.398123 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.401618 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.423276 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.448450 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-config-data\") pod \"nova-metadata-0\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.448549 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.448591 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68df3ba7-dbb7-442b-a420-984272ca19e7-logs\") pod \"nova-metadata-0\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.448637 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.448691 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7dm4\" (UniqueName: \"kubernetes.io/projected/68df3ba7-dbb7-442b-a420-984272ca19e7-kube-api-access-p7dm4\") pod \"nova-metadata-0\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.550610 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7dm4\" (UniqueName: \"kubernetes.io/projected/68df3ba7-dbb7-442b-a420-984272ca19e7-kube-api-access-p7dm4\") pod \"nova-metadata-0\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.550706 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-config-data\") pod \"nova-metadata-0\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.550929 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.550996 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68df3ba7-dbb7-442b-a420-984272ca19e7-logs\") pod \"nova-metadata-0\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.551058 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.551425 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68df3ba7-dbb7-442b-a420-984272ca19e7-logs\") pod \"nova-metadata-0\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.561452 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.561501 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-config-data\") pod \"nova-metadata-0\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.561520 4762 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.566651 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7dm4\" (UniqueName: \"kubernetes.io/projected/68df3ba7-dbb7-442b-a420-984272ca19e7-kube-api-access-p7dm4\") pod \"nova-metadata-0\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " pod="openstack/nova-metadata-0" Mar 08 00:47:19 crc kubenswrapper[4762]: I0308 00:47:19.731540 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 08 00:47:20 crc kubenswrapper[4762]: I0308 00:47:20.040603 4762 generic.go:334] "Generic (PLEG): container finished" podID="a0de75d3-2671-4d5d-9bd2-78212160505e" containerID="eeddcdda7926b57d8eb0a904f85ee9e4f4606f66e8673d7706a3ce4561e77b76" exitCode=0 Mar 08 00:47:20 crc kubenswrapper[4762]: I0308 00:47:20.040771 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0de75d3-2671-4d5d-9bd2-78212160505e","Type":"ContainerDied","Data":"eeddcdda7926b57d8eb0a904f85ee9e4f4606f66e8673d7706a3ce4561e77b76"} Mar 08 00:47:20 crc kubenswrapper[4762]: I0308 00:47:20.043509 4762 generic.go:334] "Generic (PLEG): container finished" podID="56650c9a-96a5-4911-8775-8f4c3013053f" containerID="0ad2e2fbc4ee99bb337b6dd9ee75ad05b741c7275b6219774e93d5804e269ede" exitCode=0 Mar 08 00:47:20 crc kubenswrapper[4762]: I0308 00:47:20.043574 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4b7h8" event={"ID":"56650c9a-96a5-4911-8775-8f4c3013053f","Type":"ContainerDied","Data":"0ad2e2fbc4ee99bb337b6dd9ee75ad05b741c7275b6219774e93d5804e269ede"} Mar 08 00:47:20 crc kubenswrapper[4762]: I0308 00:47:20.236736 4762 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:47:20 crc kubenswrapper[4762]: I0308 00:47:20.311894 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 08 00:47:20 crc kubenswrapper[4762]: I0308 00:47:20.369204 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0de75d3-2671-4d5d-9bd2-78212160505e-combined-ca-bundle\") pod \"a0de75d3-2671-4d5d-9bd2-78212160505e\" (UID: \"a0de75d3-2671-4d5d-9bd2-78212160505e\") " Mar 08 00:47:20 crc kubenswrapper[4762]: I0308 00:47:20.369525 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rppjv\" (UniqueName: \"kubernetes.io/projected/a0de75d3-2671-4d5d-9bd2-78212160505e-kube-api-access-rppjv\") pod \"a0de75d3-2671-4d5d-9bd2-78212160505e\" (UID: \"a0de75d3-2671-4d5d-9bd2-78212160505e\") " Mar 08 00:47:20 crc kubenswrapper[4762]: I0308 00:47:20.369616 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0de75d3-2671-4d5d-9bd2-78212160505e-config-data\") pod \"a0de75d3-2671-4d5d-9bd2-78212160505e\" (UID: \"a0de75d3-2671-4d5d-9bd2-78212160505e\") " Mar 08 00:47:20 crc kubenswrapper[4762]: I0308 00:47:20.375411 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0de75d3-2671-4d5d-9bd2-78212160505e-kube-api-access-rppjv" (OuterVolumeSpecName: "kube-api-access-rppjv") pod "a0de75d3-2671-4d5d-9bd2-78212160505e" (UID: "a0de75d3-2671-4d5d-9bd2-78212160505e"). InnerVolumeSpecName "kube-api-access-rppjv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:47:20 crc kubenswrapper[4762]: I0308 00:47:20.404455 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0de75d3-2671-4d5d-9bd2-78212160505e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0de75d3-2671-4d5d-9bd2-78212160505e" (UID: "a0de75d3-2671-4d5d-9bd2-78212160505e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:20 crc kubenswrapper[4762]: I0308 00:47:20.409660 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0de75d3-2671-4d5d-9bd2-78212160505e-config-data" (OuterVolumeSpecName: "config-data") pod "a0de75d3-2671-4d5d-9bd2-78212160505e" (UID: "a0de75d3-2671-4d5d-9bd2-78212160505e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:20 crc kubenswrapper[4762]: I0308 00:47:20.471877 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0de75d3-2671-4d5d-9bd2-78212160505e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:20 crc kubenswrapper[4762]: I0308 00:47:20.472121 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rppjv\" (UniqueName: \"kubernetes.io/projected/a0de75d3-2671-4d5d-9bd2-78212160505e-kube-api-access-rppjv\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:20 crc kubenswrapper[4762]: I0308 00:47:20.472131 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0de75d3-2671-4d5d-9bd2-78212160505e-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.055083 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"68df3ba7-dbb7-442b-a420-984272ca19e7","Type":"ContainerStarted","Data":"d0589d413ddac4ad92b8ba8bfbe7a51da93140d2860cebcf733b3aeae36f9225"} Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.055144 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68df3ba7-dbb7-442b-a420-984272ca19e7","Type":"ContainerStarted","Data":"86f15442a4e4654f43089fbd70ee0cf1e156ca6f13c0faf67910ebc1622e9492"} Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.055160 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68df3ba7-dbb7-442b-a420-984272ca19e7","Type":"ContainerStarted","Data":"a79e5d2c8beaa1f3a4e13957c3d09f650b67c54b0a4c0d4a46f01dd347f16ec5"} Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.057306 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.057334 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a0de75d3-2671-4d5d-9bd2-78212160505e","Type":"ContainerDied","Data":"079ea5ae95a1d59d3c3a609281492d192ddbe56c44f4a62fc72339f46d12096a"} Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.058239 4762 scope.go:117] "RemoveContainer" containerID="eeddcdda7926b57d8eb0a904f85ee9e4f4606f66e8673d7706a3ce4561e77b76" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.085750 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.085732517 podStartE2EDuration="2.085732517s" podCreationTimestamp="2026-03-08 00:47:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:47:21.079095105 +0000 UTC m=+1462.553239449" watchObservedRunningTime="2026-03-08 00:47:21.085732517 +0000 UTC m=+1462.559876861" Mar 08 00:47:21 crc 
kubenswrapper[4762]: I0308 00:47:21.109413 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.119279 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.163829 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 08 00:47:21 crc kubenswrapper[4762]: E0308 00:47:21.165081 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0de75d3-2671-4d5d-9bd2-78212160505e" containerName="nova-scheduler-scheduler" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.165105 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0de75d3-2671-4d5d-9bd2-78212160505e" containerName="nova-scheduler-scheduler" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.165524 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0de75d3-2671-4d5d-9bd2-78212160505e" containerName="nova-scheduler-scheduler" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.166614 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.173164 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.205080 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.288988 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af53155-9537-4534-a24f-06d043ca1cef-config-data\") pod \"nova-scheduler-0\" (UID: \"5af53155-9537-4534-a24f-06d043ca1cef\") " pod="openstack/nova-scheduler-0" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.289072 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vws24\" (UniqueName: \"kubernetes.io/projected/5af53155-9537-4534-a24f-06d043ca1cef-kube-api-access-vws24\") pod \"nova-scheduler-0\" (UID: \"5af53155-9537-4534-a24f-06d043ca1cef\") " pod="openstack/nova-scheduler-0" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.289110 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af53155-9537-4534-a24f-06d043ca1cef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5af53155-9537-4534-a24f-06d043ca1cef\") " pod="openstack/nova-scheduler-0" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.320528 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0de75d3-2671-4d5d-9bd2-78212160505e" path="/var/lib/kubelet/pods/a0de75d3-2671-4d5d-9bd2-78212160505e/volumes" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.321546 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a895fdac-bfcc-496c-9f8b-8e1d2abc733e" 
path="/var/lib/kubelet/pods/a895fdac-bfcc-496c-9f8b-8e1d2abc733e/volumes" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.396899 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af53155-9537-4534-a24f-06d043ca1cef-config-data\") pod \"nova-scheduler-0\" (UID: \"5af53155-9537-4534-a24f-06d043ca1cef\") " pod="openstack/nova-scheduler-0" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.396982 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vws24\" (UniqueName: \"kubernetes.io/projected/5af53155-9537-4534-a24f-06d043ca1cef-kube-api-access-vws24\") pod \"nova-scheduler-0\" (UID: \"5af53155-9537-4534-a24f-06d043ca1cef\") " pod="openstack/nova-scheduler-0" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.397028 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af53155-9537-4534-a24f-06d043ca1cef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5af53155-9537-4534-a24f-06d043ca1cef\") " pod="openstack/nova-scheduler-0" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.415107 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af53155-9537-4534-a24f-06d043ca1cef-config-data\") pod \"nova-scheduler-0\" (UID: \"5af53155-9537-4534-a24f-06d043ca1cef\") " pod="openstack/nova-scheduler-0" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.415133 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af53155-9537-4534-a24f-06d043ca1cef-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5af53155-9537-4534-a24f-06d043ca1cef\") " pod="openstack/nova-scheduler-0" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.425468 4762 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vws24\" (UniqueName: \"kubernetes.io/projected/5af53155-9537-4534-a24f-06d043ca1cef-kube-api-access-vws24\") pod \"nova-scheduler-0\" (UID: \"5af53155-9537-4534-a24f-06d043ca1cef\") " pod="openstack/nova-scheduler-0" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.552915 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.688651 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4b7h8" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.811013 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-combined-ca-bundle\") pod \"56650c9a-96a5-4911-8775-8f4c3013053f\" (UID: \"56650c9a-96a5-4911-8775-8f4c3013053f\") " Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.811067 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-config-data\") pod \"56650c9a-96a5-4911-8775-8f4c3013053f\" (UID: \"56650c9a-96a5-4911-8775-8f4c3013053f\") " Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.811096 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-scripts\") pod \"56650c9a-96a5-4911-8775-8f4c3013053f\" (UID: \"56650c9a-96a5-4911-8775-8f4c3013053f\") " Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.811645 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rltmq\" (UniqueName: \"kubernetes.io/projected/56650c9a-96a5-4911-8775-8f4c3013053f-kube-api-access-rltmq\") pod 
\"56650c9a-96a5-4911-8775-8f4c3013053f\" (UID: \"56650c9a-96a5-4911-8775-8f4c3013053f\") " Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.815855 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-scripts" (OuterVolumeSpecName: "scripts") pod "56650c9a-96a5-4911-8775-8f4c3013053f" (UID: "56650c9a-96a5-4911-8775-8f4c3013053f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.816023 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56650c9a-96a5-4911-8775-8f4c3013053f-kube-api-access-rltmq" (OuterVolumeSpecName: "kube-api-access-rltmq") pod "56650c9a-96a5-4911-8775-8f4c3013053f" (UID: "56650c9a-96a5-4911-8775-8f4c3013053f"). InnerVolumeSpecName "kube-api-access-rltmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.845995 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56650c9a-96a5-4911-8775-8f4c3013053f" (UID: "56650c9a-96a5-4911-8775-8f4c3013053f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.853184 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-config-data" (OuterVolumeSpecName: "config-data") pod "56650c9a-96a5-4911-8775-8f4c3013053f" (UID: "56650c9a-96a5-4911-8775-8f4c3013053f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.915142 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.915168 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.915178 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56650c9a-96a5-4911-8775-8f4c3013053f-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:21 crc kubenswrapper[4762]: I0308 00:47:21.915187 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rltmq\" (UniqueName: \"kubernetes.io/projected/56650c9a-96a5-4911-8775-8f4c3013053f-kube-api-access-rltmq\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:22 crc kubenswrapper[4762]: W0308 00:47:22.056350 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5af53155_9537_4534_a24f_06d043ca1cef.slice/crio-1d476587f18e4fc143822057aefc2ffde5f09228289100da7dff1654a2491a3f WatchSource:0}: Error finding container 1d476587f18e4fc143822057aefc2ffde5f09228289100da7dff1654a2491a3f: Status 404 returned error can't find the container with id 1d476587f18e4fc143822057aefc2ffde5f09228289100da7dff1654a2491a3f Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.057938 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.077171 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-4b7h8" Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.077182 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-4b7h8" event={"ID":"56650c9a-96a5-4911-8775-8f4c3013053f","Type":"ContainerDied","Data":"598c1a8fb8e831ad6894ce2e221f7f80f2da0afc9899b68d2a402ab6d802a54f"} Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.077528 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="598c1a8fb8e831ad6894ce2e221f7f80f2da0afc9899b68d2a402ab6d802a54f" Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.176729 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 08 00:47:22 crc kubenswrapper[4762]: E0308 00:47:22.177126 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56650c9a-96a5-4911-8775-8f4c3013053f" containerName="nova-cell1-conductor-db-sync" Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.177142 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="56650c9a-96a5-4911-8775-8f4c3013053f" containerName="nova-cell1-conductor-db-sync" Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.177346 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="56650c9a-96a5-4911-8775-8f4c3013053f" containerName="nova-cell1-conductor-db-sync" Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.178007 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.180780 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.192089 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.226051 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88cfd032-2d2e-4680-bbcb-22eac7f47578-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"88cfd032-2d2e-4680-bbcb-22eac7f47578\") " pod="openstack/nova-cell1-conductor-0" Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.226226 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8mpq\" (UniqueName: \"kubernetes.io/projected/88cfd032-2d2e-4680-bbcb-22eac7f47578-kube-api-access-j8mpq\") pod \"nova-cell1-conductor-0\" (UID: \"88cfd032-2d2e-4680-bbcb-22eac7f47578\") " pod="openstack/nova-cell1-conductor-0" Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.226286 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88cfd032-2d2e-4680-bbcb-22eac7f47578-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"88cfd032-2d2e-4680-bbcb-22eac7f47578\") " pod="openstack/nova-cell1-conductor-0" Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.328181 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8mpq\" (UniqueName: \"kubernetes.io/projected/88cfd032-2d2e-4680-bbcb-22eac7f47578-kube-api-access-j8mpq\") pod \"nova-cell1-conductor-0\" (UID: \"88cfd032-2d2e-4680-bbcb-22eac7f47578\") " pod="openstack/nova-cell1-conductor-0" Mar 08 
00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.328491 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88cfd032-2d2e-4680-bbcb-22eac7f47578-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"88cfd032-2d2e-4680-bbcb-22eac7f47578\") " pod="openstack/nova-cell1-conductor-0" Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.328679 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88cfd032-2d2e-4680-bbcb-22eac7f47578-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"88cfd032-2d2e-4680-bbcb-22eac7f47578\") " pod="openstack/nova-cell1-conductor-0" Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.331593 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88cfd032-2d2e-4680-bbcb-22eac7f47578-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"88cfd032-2d2e-4680-bbcb-22eac7f47578\") " pod="openstack/nova-cell1-conductor-0" Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.332029 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88cfd032-2d2e-4680-bbcb-22eac7f47578-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"88cfd032-2d2e-4680-bbcb-22eac7f47578\") " pod="openstack/nova-cell1-conductor-0" Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.343394 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8mpq\" (UniqueName: \"kubernetes.io/projected/88cfd032-2d2e-4680-bbcb-22eac7f47578-kube-api-access-j8mpq\") pod \"nova-cell1-conductor-0\" (UID: \"88cfd032-2d2e-4680-bbcb-22eac7f47578\") " pod="openstack/nova-cell1-conductor-0" Mar 08 00:47:22 crc kubenswrapper[4762]: I0308 00:47:22.566063 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 08 00:47:23 crc kubenswrapper[4762]: W0308 00:47:23.076336 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88cfd032_2d2e_4680_bbcb_22eac7f47578.slice/crio-de8b3ba348557b7f93690d681c445c9bb1bccfdc58c16f7eb98c437812e807bc WatchSource:0}: Error finding container de8b3ba348557b7f93690d681c445c9bb1bccfdc58c16f7eb98c437812e807bc: Status 404 returned error can't find the container with id de8b3ba348557b7f93690d681c445c9bb1bccfdc58c16f7eb98c437812e807bc Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.092915 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"88cfd032-2d2e-4680-bbcb-22eac7f47578","Type":"ContainerStarted","Data":"de8b3ba348557b7f93690d681c445c9bb1bccfdc58c16f7eb98c437812e807bc"} Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.098997 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5af53155-9537-4534-a24f-06d043ca1cef","Type":"ContainerStarted","Data":"7f9ec88c92cef509d85ccdc3804b445e7709fc5db8621103f6a44b2fd39e6a20"} Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.099057 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5af53155-9537-4534-a24f-06d043ca1cef","Type":"ContainerStarted","Data":"1d476587f18e4fc143822057aefc2ffde5f09228289100da7dff1654a2491a3f"} Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.106213 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.133289 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.133271065 podStartE2EDuration="2.133271065s" podCreationTimestamp="2026-03-08 00:47:21 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:47:23.118229996 +0000 UTC m=+1464.592374340" watchObservedRunningTime="2026-03-08 00:47:23.133271065 +0000 UTC m=+1464.607415409" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.526866 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-snqd5"] Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.528148 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-snqd5" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.531731 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.531927 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-kqtwz" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.531937 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.532063 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.543730 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-snqd5"] Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.655887 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-combined-ca-bundle\") pod \"aodh-db-sync-snqd5\" (UID: \"cd20c5d8-125b-4519-be85-bd0b7d23c141\") " pod="openstack/aodh-db-sync-snqd5" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.656012 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-scripts\") pod \"aodh-db-sync-snqd5\" (UID: \"cd20c5d8-125b-4519-be85-bd0b7d23c141\") " pod="openstack/aodh-db-sync-snqd5" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.656034 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stspk\" (UniqueName: \"kubernetes.io/projected/cd20c5d8-125b-4519-be85-bd0b7d23c141-kube-api-access-stspk\") pod \"aodh-db-sync-snqd5\" (UID: \"cd20c5d8-125b-4519-be85-bd0b7d23c141\") " pod="openstack/aodh-db-sync-snqd5" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.656077 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-config-data\") pod \"aodh-db-sync-snqd5\" (UID: \"cd20c5d8-125b-4519-be85-bd0b7d23c141\") " pod="openstack/aodh-db-sync-snqd5" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.758354 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-scripts\") pod \"aodh-db-sync-snqd5\" (UID: \"cd20c5d8-125b-4519-be85-bd0b7d23c141\") " pod="openstack/aodh-db-sync-snqd5" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.758397 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stspk\" (UniqueName: \"kubernetes.io/projected/cd20c5d8-125b-4519-be85-bd0b7d23c141-kube-api-access-stspk\") pod \"aodh-db-sync-snqd5\" (UID: \"cd20c5d8-125b-4519-be85-bd0b7d23c141\") " pod="openstack/aodh-db-sync-snqd5" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.758453 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-config-data\") pod \"aodh-db-sync-snqd5\" (UID: 
\"cd20c5d8-125b-4519-be85-bd0b7d23c141\") " pod="openstack/aodh-db-sync-snqd5" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.758524 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-combined-ca-bundle\") pod \"aodh-db-sync-snqd5\" (UID: \"cd20c5d8-125b-4519-be85-bd0b7d23c141\") " pod="openstack/aodh-db-sync-snqd5" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.765359 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-config-data\") pod \"aodh-db-sync-snqd5\" (UID: \"cd20c5d8-125b-4519-be85-bd0b7d23c141\") " pod="openstack/aodh-db-sync-snqd5" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.778297 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-scripts\") pod \"aodh-db-sync-snqd5\" (UID: \"cd20c5d8-125b-4519-be85-bd0b7d23c141\") " pod="openstack/aodh-db-sync-snqd5" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.778906 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stspk\" (UniqueName: \"kubernetes.io/projected/cd20c5d8-125b-4519-be85-bd0b7d23c141-kube-api-access-stspk\") pod \"aodh-db-sync-snqd5\" (UID: \"cd20c5d8-125b-4519-be85-bd0b7d23c141\") " pod="openstack/aodh-db-sync-snqd5" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.784365 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-combined-ca-bundle\") pod \"aodh-db-sync-snqd5\" (UID: \"cd20c5d8-125b-4519-be85-bd0b7d23c141\") " pod="openstack/aodh-db-sync-snqd5" Mar 08 00:47:23 crc kubenswrapper[4762]: I0308 00:47:23.886054 4762 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/aodh-db-sync-snqd5" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.040588 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.064432 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bblj\" (UniqueName: \"kubernetes.io/projected/77f66769-f7d1-4efd-b2ea-7cdcebf49500-kube-api-access-5bblj\") pod \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\" (UID: \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\") " Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.064528 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f66769-f7d1-4efd-b2ea-7cdcebf49500-config-data\") pod \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\" (UID: \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\") " Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.064559 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77f66769-f7d1-4efd-b2ea-7cdcebf49500-logs\") pod \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\" (UID: \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\") " Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.064600 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f66769-f7d1-4efd-b2ea-7cdcebf49500-combined-ca-bundle\") pod \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\" (UID: \"77f66769-f7d1-4efd-b2ea-7cdcebf49500\") " Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.065462 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77f66769-f7d1-4efd-b2ea-7cdcebf49500-logs" (OuterVolumeSpecName: "logs") pod "77f66769-f7d1-4efd-b2ea-7cdcebf49500" (UID: "77f66769-f7d1-4efd-b2ea-7cdcebf49500"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.084946 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77f66769-f7d1-4efd-b2ea-7cdcebf49500-kube-api-access-5bblj" (OuterVolumeSpecName: "kube-api-access-5bblj") pod "77f66769-f7d1-4efd-b2ea-7cdcebf49500" (UID: "77f66769-f7d1-4efd-b2ea-7cdcebf49500"). InnerVolumeSpecName "kube-api-access-5bblj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.103166 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77f66769-f7d1-4efd-b2ea-7cdcebf49500-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77f66769-f7d1-4efd-b2ea-7cdcebf49500" (UID: "77f66769-f7d1-4efd-b2ea-7cdcebf49500"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.118366 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77f66769-f7d1-4efd-b2ea-7cdcebf49500-config-data" (OuterVolumeSpecName: "config-data") pod "77f66769-f7d1-4efd-b2ea-7cdcebf49500" (UID: "77f66769-f7d1-4efd-b2ea-7cdcebf49500"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.150931 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"88cfd032-2d2e-4680-bbcb-22eac7f47578","Type":"ContainerStarted","Data":"1b6ae817df419b8009a45cfe47381578c02144f0b49c1ea62ff08261baf71405"} Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.151669 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.154991 4762 generic.go:334] "Generic (PLEG): container finished" podID="77f66769-f7d1-4efd-b2ea-7cdcebf49500" containerID="55911e487d11140e85b9962185a42688c70f5e60f948641e3c35216c0a29145f" exitCode=0 Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.155115 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.155160 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"77f66769-f7d1-4efd-b2ea-7cdcebf49500","Type":"ContainerDied","Data":"55911e487d11140e85b9962185a42688c70f5e60f948641e3c35216c0a29145f"} Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.155192 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"77f66769-f7d1-4efd-b2ea-7cdcebf49500","Type":"ContainerDied","Data":"3567e7e938cd12a349a834a386397d0975376a3c6a42a8e378a7dcf35afcc3e5"} Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.155224 4762 scope.go:117] "RemoveContainer" containerID="55911e487d11140e85b9962185a42688c70f5e60f948641e3c35216c0a29145f" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.172153 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bblj\" (UniqueName: \"kubernetes.io/projected/77f66769-f7d1-4efd-b2ea-7cdcebf49500-kube-api-access-5bblj\") on node \"crc\" 
DevicePath \"\"" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.172193 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f66769-f7d1-4efd-b2ea-7cdcebf49500-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.172207 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77f66769-f7d1-4efd-b2ea-7cdcebf49500-logs\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.172225 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f66769-f7d1-4efd-b2ea-7cdcebf49500-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.189850 4762 scope.go:117] "RemoveContainer" containerID="c90a44801bdedf1a499f84f5d11b0363c66a3fe33c07279e7f3cea377a607839" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.202001 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.201978533 podStartE2EDuration="2.201978533s" podCreationTimestamp="2026-03-08 00:47:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:47:24.167919165 +0000 UTC m=+1465.642063509" watchObservedRunningTime="2026-03-08 00:47:24.201978533 +0000 UTC m=+1465.676122877" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.232080 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.233336 4762 scope.go:117] "RemoveContainer" containerID="55911e487d11140e85b9962185a42688c70f5e60f948641e3c35216c0a29145f" Mar 08 00:47:24 crc kubenswrapper[4762]: E0308 00:47:24.237533 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"55911e487d11140e85b9962185a42688c70f5e60f948641e3c35216c0a29145f\": container with ID starting with 55911e487d11140e85b9962185a42688c70f5e60f948641e3c35216c0a29145f not found: ID does not exist" containerID="55911e487d11140e85b9962185a42688c70f5e60f948641e3c35216c0a29145f" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.237574 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55911e487d11140e85b9962185a42688c70f5e60f948641e3c35216c0a29145f"} err="failed to get container status \"55911e487d11140e85b9962185a42688c70f5e60f948641e3c35216c0a29145f\": rpc error: code = NotFound desc = could not find container \"55911e487d11140e85b9962185a42688c70f5e60f948641e3c35216c0a29145f\": container with ID starting with 55911e487d11140e85b9962185a42688c70f5e60f948641e3c35216c0a29145f not found: ID does not exist" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.237618 4762 scope.go:117] "RemoveContainer" containerID="c90a44801bdedf1a499f84f5d11b0363c66a3fe33c07279e7f3cea377a607839" Mar 08 00:47:24 crc kubenswrapper[4762]: E0308 00:47:24.238157 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c90a44801bdedf1a499f84f5d11b0363c66a3fe33c07279e7f3cea377a607839\": container with ID starting with c90a44801bdedf1a499f84f5d11b0363c66a3fe33c07279e7f3cea377a607839 not found: ID does not exist" containerID="c90a44801bdedf1a499f84f5d11b0363c66a3fe33c07279e7f3cea377a607839" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.238196 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c90a44801bdedf1a499f84f5d11b0363c66a3fe33c07279e7f3cea377a607839"} err="failed to get container status \"c90a44801bdedf1a499f84f5d11b0363c66a3fe33c07279e7f3cea377a607839\": rpc error: code = NotFound desc = could not find container 
\"c90a44801bdedf1a499f84f5d11b0363c66a3fe33c07279e7f3cea377a607839\": container with ID starting with c90a44801bdedf1a499f84f5d11b0363c66a3fe33c07279e7f3cea377a607839 not found: ID does not exist" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.246925 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.285468 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 08 00:47:24 crc kubenswrapper[4762]: E0308 00:47:24.285918 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77f66769-f7d1-4efd-b2ea-7cdcebf49500" containerName="nova-api-api" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.285930 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f66769-f7d1-4efd-b2ea-7cdcebf49500" containerName="nova-api-api" Mar 08 00:47:24 crc kubenswrapper[4762]: E0308 00:47:24.285942 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77f66769-f7d1-4efd-b2ea-7cdcebf49500" containerName="nova-api-log" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.285948 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f66769-f7d1-4efd-b2ea-7cdcebf49500" containerName="nova-api-log" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.286232 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="77f66769-f7d1-4efd-b2ea-7cdcebf49500" containerName="nova-api-api" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.286250 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="77f66769-f7d1-4efd-b2ea-7cdcebf49500" containerName="nova-api-log" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.287531 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.298543 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.333642 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.474610 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-snqd5"] Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.487628 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24b679d5-59e9-4303-a0f4-54dc3e7dd056-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\") " pod="openstack/nova-api-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.487667 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24b679d5-59e9-4303-a0f4-54dc3e7dd056-config-data\") pod \"nova-api-0\" (UID: \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\") " pod="openstack/nova-api-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.487952 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx6pl\" (UniqueName: \"kubernetes.io/projected/24b679d5-59e9-4303-a0f4-54dc3e7dd056-kube-api-access-xx6pl\") pod \"nova-api-0\" (UID: \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\") " pod="openstack/nova-api-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.488068 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24b679d5-59e9-4303-a0f4-54dc3e7dd056-logs\") pod \"nova-api-0\" (UID: \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\") " pod="openstack/nova-api-0" Mar 
08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.590545 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24b679d5-59e9-4303-a0f4-54dc3e7dd056-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\") " pod="openstack/nova-api-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.590772 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24b679d5-59e9-4303-a0f4-54dc3e7dd056-config-data\") pod \"nova-api-0\" (UID: \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\") " pod="openstack/nova-api-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.590848 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx6pl\" (UniqueName: \"kubernetes.io/projected/24b679d5-59e9-4303-a0f4-54dc3e7dd056-kube-api-access-xx6pl\") pod \"nova-api-0\" (UID: \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\") " pod="openstack/nova-api-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.590889 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24b679d5-59e9-4303-a0f4-54dc3e7dd056-logs\") pod \"nova-api-0\" (UID: \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\") " pod="openstack/nova-api-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.591227 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24b679d5-59e9-4303-a0f4-54dc3e7dd056-logs\") pod \"nova-api-0\" (UID: \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\") " pod="openstack/nova-api-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.599810 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24b679d5-59e9-4303-a0f4-54dc3e7dd056-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\") " pod="openstack/nova-api-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.612770 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24b679d5-59e9-4303-a0f4-54dc3e7dd056-config-data\") pod \"nova-api-0\" (UID: \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\") " pod="openstack/nova-api-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.616487 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx6pl\" (UniqueName: \"kubernetes.io/projected/24b679d5-59e9-4303-a0f4-54dc3e7dd056-kube-api-access-xx6pl\") pod \"nova-api-0\" (UID: \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\") " pod="openstack/nova-api-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.624594 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.739235 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 08 00:47:24 crc kubenswrapper[4762]: I0308 00:47:24.740305 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 08 00:47:25 crc kubenswrapper[4762]: I0308 00:47:25.136350 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 08 00:47:25 crc kubenswrapper[4762]: W0308 00:47:25.145023 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24b679d5_59e9_4303_a0f4_54dc3e7dd056.slice/crio-460e64a605aac5376b947868ea45633ac04d59f4bd67906e6a7784ae009c4b98 WatchSource:0}: Error finding container 460e64a605aac5376b947868ea45633ac04d59f4bd67906e6a7784ae009c4b98: Status 404 returned error can't find the container with id 460e64a605aac5376b947868ea45633ac04d59f4bd67906e6a7784ae009c4b98 Mar 08 00:47:25 crc kubenswrapper[4762]: I0308 
00:47:25.169211 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-snqd5" event={"ID":"cd20c5d8-125b-4519-be85-bd0b7d23c141","Type":"ContainerStarted","Data":"c307f6b8751fdee91383262dfee3eb09b43efd5fd189c8ef4b5b023543792f4b"} Mar 08 00:47:25 crc kubenswrapper[4762]: I0308 00:47:25.170620 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24b679d5-59e9-4303-a0f4-54dc3e7dd056","Type":"ContainerStarted","Data":"460e64a605aac5376b947868ea45633ac04d59f4bd67906e6a7784ae009c4b98"} Mar 08 00:47:25 crc kubenswrapper[4762]: I0308 00:47:25.280065 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77f66769-f7d1-4efd-b2ea-7cdcebf49500" path="/var/lib/kubelet/pods/77f66769-f7d1-4efd-b2ea-7cdcebf49500/volumes" Mar 08 00:47:26 crc kubenswrapper[4762]: I0308 00:47:26.183090 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24b679d5-59e9-4303-a0f4-54dc3e7dd056","Type":"ContainerStarted","Data":"256fac76c17776b9be59aa881c56e7d5842676af558dbb441e55b5a9c711c515"} Mar 08 00:47:26 crc kubenswrapper[4762]: I0308 00:47:26.183711 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24b679d5-59e9-4303-a0f4-54dc3e7dd056","Type":"ContainerStarted","Data":"e2d6a9e5997b0bfdef3f7e53f6614f814d891c4af380e3ff5afad1c2b4c92729"} Mar 08 00:47:26 crc kubenswrapper[4762]: I0308 00:47:26.215607 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.215585017 podStartE2EDuration="2.215585017s" podCreationTimestamp="2026-03-08 00:47:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:47:26.203909531 +0000 UTC m=+1467.678053895" watchObservedRunningTime="2026-03-08 00:47:26.215585017 +0000 UTC m=+1467.689729361" Mar 08 00:47:26 crc kubenswrapper[4762]: I0308 
00:47:26.553870 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 08 00:47:29 crc kubenswrapper[4762]: I0308 00:47:29.220693 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-snqd5" event={"ID":"cd20c5d8-125b-4519-be85-bd0b7d23c141","Type":"ContainerStarted","Data":"2ad145ab5f5be59731a223760e0a734bfaf8e75a759e243e8d17d9723503be74"} Mar 08 00:47:29 crc kubenswrapper[4762]: I0308 00:47:29.250479 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-snqd5" podStartSLOduration=1.957006302 podStartE2EDuration="6.250463883s" podCreationTimestamp="2026-03-08 00:47:23 +0000 UTC" firstStartedPulling="2026-03-08 00:47:24.494929821 +0000 UTC m=+1465.969074165" lastFinishedPulling="2026-03-08 00:47:28.788387372 +0000 UTC m=+1470.262531746" observedRunningTime="2026-03-08 00:47:29.245080959 +0000 UTC m=+1470.719225313" watchObservedRunningTime="2026-03-08 00:47:29.250463883 +0000 UTC m=+1470.724608227" Mar 08 00:47:29 crc kubenswrapper[4762]: I0308 00:47:29.734810 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 08 00:47:29 crc kubenswrapper[4762]: I0308 00:47:29.734920 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 08 00:47:30 crc kubenswrapper[4762]: I0308 00:47:30.744972 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="68df3ba7-dbb7-442b-a420-984272ca19e7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.240:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 00:47:30 crc kubenswrapper[4762]: I0308 00:47:30.745020 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="68df3ba7-dbb7-442b-a420-984272ca19e7" containerName="nova-metadata-log" 
probeResult="failure" output="Get \"https://10.217.0.240:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 00:47:31 crc kubenswrapper[4762]: I0308 00:47:31.553453 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 08 00:47:31 crc kubenswrapper[4762]: I0308 00:47:31.618245 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 08 00:47:32 crc kubenswrapper[4762]: I0308 00:47:32.274158 4762 generic.go:334] "Generic (PLEG): container finished" podID="cd20c5d8-125b-4519-be85-bd0b7d23c141" containerID="2ad145ab5f5be59731a223760e0a734bfaf8e75a759e243e8d17d9723503be74" exitCode=0 Mar 08 00:47:32 crc kubenswrapper[4762]: I0308 00:47:32.274290 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-snqd5" event={"ID":"cd20c5d8-125b-4519-be85-bd0b7d23c141","Type":"ContainerDied","Data":"2ad145ab5f5be59731a223760e0a734bfaf8e75a759e243e8d17d9723503be74"} Mar 08 00:47:32 crc kubenswrapper[4762]: I0308 00:47:32.333674 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 08 00:47:32 crc kubenswrapper[4762]: I0308 00:47:32.607307 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 08 00:47:33 crc kubenswrapper[4762]: I0308 00:47:33.790269 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-snqd5" Mar 08 00:47:33 crc kubenswrapper[4762]: I0308 00:47:33.906724 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-scripts\") pod \"cd20c5d8-125b-4519-be85-bd0b7d23c141\" (UID: \"cd20c5d8-125b-4519-be85-bd0b7d23c141\") " Mar 08 00:47:33 crc kubenswrapper[4762]: I0308 00:47:33.906930 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stspk\" (UniqueName: \"kubernetes.io/projected/cd20c5d8-125b-4519-be85-bd0b7d23c141-kube-api-access-stspk\") pod \"cd20c5d8-125b-4519-be85-bd0b7d23c141\" (UID: \"cd20c5d8-125b-4519-be85-bd0b7d23c141\") " Mar 08 00:47:33 crc kubenswrapper[4762]: I0308 00:47:33.906962 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-config-data\") pod \"cd20c5d8-125b-4519-be85-bd0b7d23c141\" (UID: \"cd20c5d8-125b-4519-be85-bd0b7d23c141\") " Mar 08 00:47:33 crc kubenswrapper[4762]: I0308 00:47:33.907051 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-combined-ca-bundle\") pod \"cd20c5d8-125b-4519-be85-bd0b7d23c141\" (UID: \"cd20c5d8-125b-4519-be85-bd0b7d23c141\") " Mar 08 00:47:33 crc kubenswrapper[4762]: I0308 00:47:33.913992 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-scripts" (OuterVolumeSpecName: "scripts") pod "cd20c5d8-125b-4519-be85-bd0b7d23c141" (UID: "cd20c5d8-125b-4519-be85-bd0b7d23c141"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:33 crc kubenswrapper[4762]: I0308 00:47:33.919455 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd20c5d8-125b-4519-be85-bd0b7d23c141-kube-api-access-stspk" (OuterVolumeSpecName: "kube-api-access-stspk") pod "cd20c5d8-125b-4519-be85-bd0b7d23c141" (UID: "cd20c5d8-125b-4519-be85-bd0b7d23c141"). InnerVolumeSpecName "kube-api-access-stspk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:47:33 crc kubenswrapper[4762]: I0308 00:47:33.937205 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-config-data" (OuterVolumeSpecName: "config-data") pod "cd20c5d8-125b-4519-be85-bd0b7d23c141" (UID: "cd20c5d8-125b-4519-be85-bd0b7d23c141"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:33 crc kubenswrapper[4762]: I0308 00:47:33.939111 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd20c5d8-125b-4519-be85-bd0b7d23c141" (UID: "cd20c5d8-125b-4519-be85-bd0b7d23c141"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:34 crc kubenswrapper[4762]: I0308 00:47:34.009997 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stspk\" (UniqueName: \"kubernetes.io/projected/cd20c5d8-125b-4519-be85-bd0b7d23c141-kube-api-access-stspk\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:34 crc kubenswrapper[4762]: I0308 00:47:34.010032 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:34 crc kubenswrapper[4762]: I0308 00:47:34.010042 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:34 crc kubenswrapper[4762]: I0308 00:47:34.010052 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd20c5d8-125b-4519-be85-bd0b7d23c141-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:34 crc kubenswrapper[4762]: I0308 00:47:34.311480 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-snqd5" event={"ID":"cd20c5d8-125b-4519-be85-bd0b7d23c141","Type":"ContainerDied","Data":"c307f6b8751fdee91383262dfee3eb09b43efd5fd189c8ef4b5b023543792f4b"} Mar 08 00:47:34 crc kubenswrapper[4762]: I0308 00:47:34.311675 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c307f6b8751fdee91383262dfee3eb09b43efd5fd189c8ef4b5b023543792f4b" Mar 08 00:47:34 crc kubenswrapper[4762]: I0308 00:47:34.311572 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-snqd5" Mar 08 00:47:34 crc kubenswrapper[4762]: I0308 00:47:34.625403 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 08 00:47:34 crc kubenswrapper[4762]: I0308 00:47:34.625714 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 08 00:47:35 crc kubenswrapper[4762]: I0308 00:47:35.708036 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="24b679d5-59e9-4303-a0f4-54dc3e7dd056" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.244:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 00:47:35 crc kubenswrapper[4762]: I0308 00:47:35.708057 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="24b679d5-59e9-4303-a0f4-54dc3e7dd056" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.244:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.219006 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 08 00:47:38 crc kubenswrapper[4762]: E0308 00:47:38.219918 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd20c5d8-125b-4519-be85-bd0b7d23c141" containerName="aodh-db-sync" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.219937 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd20c5d8-125b-4519-be85-bd0b7d23c141" containerName="aodh-db-sync" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.220181 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd20c5d8-125b-4519-be85-bd0b7d23c141" containerName="aodh-db-sync" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.222505 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.225252 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfd8w\" (UniqueName: \"kubernetes.io/projected/478f1074-2af5-4d4a-b503-9e4418586e31-kube-api-access-jfd8w\") pod \"aodh-0\" (UID: \"478f1074-2af5-4d4a-b503-9e4418586e31\") " pod="openstack/aodh-0" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.225366 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-scripts\") pod \"aodh-0\" (UID: \"478f1074-2af5-4d4a-b503-9e4418586e31\") " pod="openstack/aodh-0" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.225498 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-config-data\") pod \"aodh-0\" (UID: \"478f1074-2af5-4d4a-b503-9e4418586e31\") " pod="openstack/aodh-0" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.225625 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-combined-ca-bundle\") pod \"aodh-0\" (UID: \"478f1074-2af5-4d4a-b503-9e4418586e31\") " pod="openstack/aodh-0" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.229995 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-kqtwz" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.230219 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.230613 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" 
Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.237478 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.326946 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfd8w\" (UniqueName: \"kubernetes.io/projected/478f1074-2af5-4d4a-b503-9e4418586e31-kube-api-access-jfd8w\") pod \"aodh-0\" (UID: \"478f1074-2af5-4d4a-b503-9e4418586e31\") " pod="openstack/aodh-0" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.327319 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-scripts\") pod \"aodh-0\" (UID: \"478f1074-2af5-4d4a-b503-9e4418586e31\") " pod="openstack/aodh-0" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.327414 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-config-data\") pod \"aodh-0\" (UID: \"478f1074-2af5-4d4a-b503-9e4418586e31\") " pod="openstack/aodh-0" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.327480 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-combined-ca-bundle\") pod \"aodh-0\" (UID: \"478f1074-2af5-4d4a-b503-9e4418586e31\") " pod="openstack/aodh-0" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.339370 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-config-data\") pod \"aodh-0\" (UID: \"478f1074-2af5-4d4a-b503-9e4418586e31\") " pod="openstack/aodh-0" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.339960 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-combined-ca-bundle\") pod \"aodh-0\" (UID: \"478f1074-2af5-4d4a-b503-9e4418586e31\") " pod="openstack/aodh-0" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.340193 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-scripts\") pod \"aodh-0\" (UID: \"478f1074-2af5-4d4a-b503-9e4418586e31\") " pod="openstack/aodh-0" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.347374 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfd8w\" (UniqueName: \"kubernetes.io/projected/478f1074-2af5-4d4a-b503-9e4418586e31-kube-api-access-jfd8w\") pod \"aodh-0\" (UID: \"478f1074-2af5-4d4a-b503-9e4418586e31\") " pod="openstack/aodh-0" Mar 08 00:47:38 crc kubenswrapper[4762]: I0308 00:47:38.551604 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 08 00:47:39 crc kubenswrapper[4762]: I0308 00:47:39.129411 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 08 00:47:39 crc kubenswrapper[4762]: I0308 00:47:39.380365 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"478f1074-2af5-4d4a-b503-9e4418586e31","Type":"ContainerStarted","Data":"a7d163abbc390aa4e1604f067c484c47789e1d4935e99880e35b0559c241cffa"} Mar 08 00:47:39 crc kubenswrapper[4762]: I0308 00:47:39.738648 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 08 00:47:39 crc kubenswrapper[4762]: I0308 00:47:39.739534 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 08 00:47:39 crc kubenswrapper[4762]: I0308 00:47:39.747490 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 08 00:47:40 
crc kubenswrapper[4762]: I0308 00:47:40.404107 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"478f1074-2af5-4d4a-b503-9e4418586e31","Type":"ContainerStarted","Data":"d9ea6f6ed1de2e41cd66ad1329ed2769034d80b2ae903d6f1396ebac862908b4"} Mar 08 00:47:40 crc kubenswrapper[4762]: I0308 00:47:40.420991 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 08 00:47:40 crc kubenswrapper[4762]: I0308 00:47:40.425717 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:47:40 crc kubenswrapper[4762]: I0308 00:47:40.426064 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="ceilometer-central-agent" containerID="cri-o://973fbd398fb56eb76a78c13f247426ba7cba02ba71382588679df59c2bfda3ea" gracePeriod=30 Mar 08 00:47:40 crc kubenswrapper[4762]: I0308 00:47:40.426585 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="sg-core" containerID="cri-o://c3cafd82f77a0db7e0a3fe9fc65b9ef96d9d186b354d145198ab88a96bbe1ec7" gracePeriod=30 Mar 08 00:47:40 crc kubenswrapper[4762]: I0308 00:47:40.426707 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="proxy-httpd" containerID="cri-o://91905d79615d3e9a43d074de4ee94acfa48c2b92369065bbfd487a625ceedb36" gracePeriod=30 Mar 08 00:47:40 crc kubenswrapper[4762]: I0308 00:47:40.426810 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="ceilometer-notification-agent" containerID="cri-o://a26bdf0a7ea35fb84ed504a559d33acfd3a4d73ae04389e6e1c581648ae1d2b8" gracePeriod=30 Mar 08 
00:47:40 crc kubenswrapper[4762]: I0308 00:47:40.449453 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.240633 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.414509 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"478f1074-2af5-4d4a-b503-9e4418586e31","Type":"ContainerStarted","Data":"626e0da3ddd506a08ca679a7d0a12b07a98f9b3f605af4def96986bcf242e111"} Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.417153 4762 generic.go:334] "Generic (PLEG): container finished" podID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerID="91905d79615d3e9a43d074de4ee94acfa48c2b92369065bbfd487a625ceedb36" exitCode=0 Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.417178 4762 generic.go:334] "Generic (PLEG): container finished" podID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerID="c3cafd82f77a0db7e0a3fe9fc65b9ef96d9d186b354d145198ab88a96bbe1ec7" exitCode=2 Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.417186 4762 generic.go:334] "Generic (PLEG): container finished" podID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerID="a26bdf0a7ea35fb84ed504a559d33acfd3a4d73ae04389e6e1c581648ae1d2b8" exitCode=0 Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.417193 4762 generic.go:334] "Generic (PLEG): container finished" podID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerID="973fbd398fb56eb76a78c13f247426ba7cba02ba71382588679df59c2bfda3ea" exitCode=0 Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.417902 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7","Type":"ContainerDied","Data":"91905d79615d3e9a43d074de4ee94acfa48c2b92369065bbfd487a625ceedb36"} Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.417969 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7","Type":"ContainerDied","Data":"c3cafd82f77a0db7e0a3fe9fc65b9ef96d9d186b354d145198ab88a96bbe1ec7"} Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.417990 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7","Type":"ContainerDied","Data":"a26bdf0a7ea35fb84ed504a559d33acfd3a4d73ae04389e6e1c581648ae1d2b8"} Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.418005 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7","Type":"ContainerDied","Data":"973fbd398fb56eb76a78c13f247426ba7cba02ba71382588679df59c2bfda3ea"} Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.418017 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7","Type":"ContainerDied","Data":"78bb2ffe38f0c448e0b083f4fc4242fc92e16b6efcb16a5c5607329bb46efc59"} Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.418027 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78bb2ffe38f0c448e0b083f4fc4242fc92e16b6efcb16a5c5607329bb46efc59" Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.451149 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.507423 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-run-httpd\") pod \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.511295 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" (UID: "2f7a2da2-b29f-404e-ae2d-6ad86b15cef7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.613103 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-config-data\") pod \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.613154 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-combined-ca-bundle\") pod \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.613196 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-log-httpd\") pod \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.613263 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-8mnmq\" (UniqueName: \"kubernetes.io/projected/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-kube-api-access-8mnmq\") pod \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.613383 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-sg-core-conf-yaml\") pod \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.613431 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-scripts\") pod \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\" (UID: \"2f7a2da2-b29f-404e-ae2d-6ad86b15cef7\") " Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.613956 4762 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.614082 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" (UID: "2f7a2da2-b29f-404e-ae2d-6ad86b15cef7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.620901 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-scripts" (OuterVolumeSpecName: "scripts") pod "2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" (UID: "2f7a2da2-b29f-404e-ae2d-6ad86b15cef7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.623482 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-kube-api-access-8mnmq" (OuterVolumeSpecName: "kube-api-access-8mnmq") pod "2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" (UID: "2f7a2da2-b29f-404e-ae2d-6ad86b15cef7"). InnerVolumeSpecName "kube-api-access-8mnmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.650449 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" (UID: "2f7a2da2-b29f-404e-ae2d-6ad86b15cef7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.717417 4762 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.717446 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.717455 4762 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.717465 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mnmq\" (UniqueName: \"kubernetes.io/projected/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-kube-api-access-8mnmq\") on node 
\"crc\" DevicePath \"\"" Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.734392 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" (UID: "2f7a2da2-b29f-404e-ae2d-6ad86b15cef7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.746011 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-config-data" (OuterVolumeSpecName: "config-data") pod "2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" (UID: "2f7a2da2-b29f-404e-ae2d-6ad86b15cef7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.819291 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:41 crc kubenswrapper[4762]: I0308 00:47:41.819326 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.426503 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.479108 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.494413 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.511519 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:47:42 crc kubenswrapper[4762]: E0308 00:47:42.512135 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="proxy-httpd" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.512214 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="proxy-httpd" Mar 08 00:47:42 crc kubenswrapper[4762]: E0308 00:47:42.512302 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="ceilometer-notification-agent" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.512377 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="ceilometer-notification-agent" Mar 08 00:47:42 crc kubenswrapper[4762]: E0308 00:47:42.512436 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="ceilometer-central-agent" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.512491 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="ceilometer-central-agent" Mar 08 00:47:42 crc kubenswrapper[4762]: E0308 00:47:42.512546 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="sg-core" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.512602 4762 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="sg-core" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.512843 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="ceilometer-central-agent" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.512926 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="ceilometer-notification-agent" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.512992 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="sg-core" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.513047 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" containerName="proxy-httpd" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.514794 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.518277 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.518662 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.528556 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.778616 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssfx8\" (UniqueName: \"kubernetes.io/projected/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-kube-api-access-ssfx8\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.778889 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.779219 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-run-httpd\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.779251 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-log-httpd\") pod \"ceilometer-0\" (UID: 
\"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.779284 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-scripts\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.779303 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.779344 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-config-data\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.881058 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-config-data\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.881162 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssfx8\" (UniqueName: \"kubernetes.io/projected/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-kube-api-access-ssfx8\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.881182 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.881279 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-run-httpd\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.881295 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-log-httpd\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.881314 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-scripts\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.881331 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.882439 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-run-httpd\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " 
pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.884427 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-log-httpd\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.888822 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-scripts\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.891598 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.891677 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.891704 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-config-data\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.901406 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssfx8\" (UniqueName: 
\"kubernetes.io/projected/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-kube-api-access-ssfx8\") pod \"ceilometer-0\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " pod="openstack/ceilometer-0" Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.923652 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:47:42 crc kubenswrapper[4762]: I0308 00:47:42.924955 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:47:43 crc kubenswrapper[4762]: I0308 00:47:43.277850 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f7a2da2-b29f-404e-ae2d-6ad86b15cef7" path="/var/lib/kubelet/pods/2f7a2da2-b29f-404e-ae2d-6ad86b15cef7/volumes" Mar 08 00:47:43 crc kubenswrapper[4762]: I0308 00:47:43.437496 4762 generic.go:334] "Generic (PLEG): container finished" podID="ede839bd-0b3d-40a3-993e-99df5675f617" containerID="0ade3523ef37f533ab708b0a2d54f5af109ed7d5e82ff7d4ba251836373561bf" exitCode=137 Mar 08 00:47:43 crc kubenswrapper[4762]: I0308 00:47:43.437534 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ede839bd-0b3d-40a3-993e-99df5675f617","Type":"ContainerDied","Data":"0ade3523ef37f533ab708b0a2d54f5af109ed7d5e82ff7d4ba251836373561bf"} Mar 08 00:47:43 crc kubenswrapper[4762]: I0308 00:47:43.466898 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:47:43 crc kubenswrapper[4762]: W0308 00:47:43.538966 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b6d410e_5c92_45b9_a1d2_d7da611f7adf.slice/crio-e9315739513637b02382eddc6c718e5ab4abb44f55d4855e49a730e241456ae7 WatchSource:0}: Error finding container e9315739513637b02382eddc6c718e5ab4abb44f55d4855e49a730e241456ae7: Status 404 returned error can't find the container with id 
e9315739513637b02382eddc6c718e5ab4abb44f55d4855e49a730e241456ae7 Mar 08 00:47:43 crc kubenswrapper[4762]: I0308 00:47:43.607374 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:43 crc kubenswrapper[4762]: I0308 00:47:43.693799 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede839bd-0b3d-40a3-993e-99df5675f617-config-data\") pod \"ede839bd-0b3d-40a3-993e-99df5675f617\" (UID: \"ede839bd-0b3d-40a3-993e-99df5675f617\") " Mar 08 00:47:43 crc kubenswrapper[4762]: I0308 00:47:43.693853 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede839bd-0b3d-40a3-993e-99df5675f617-combined-ca-bundle\") pod \"ede839bd-0b3d-40a3-993e-99df5675f617\" (UID: \"ede839bd-0b3d-40a3-993e-99df5675f617\") " Mar 08 00:47:43 crc kubenswrapper[4762]: I0308 00:47:43.693976 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86nn5\" (UniqueName: \"kubernetes.io/projected/ede839bd-0b3d-40a3-993e-99df5675f617-kube-api-access-86nn5\") pod \"ede839bd-0b3d-40a3-993e-99df5675f617\" (UID: \"ede839bd-0b3d-40a3-993e-99df5675f617\") " Mar 08 00:47:43 crc kubenswrapper[4762]: I0308 00:47:43.698275 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede839bd-0b3d-40a3-993e-99df5675f617-kube-api-access-86nn5" (OuterVolumeSpecName: "kube-api-access-86nn5") pod "ede839bd-0b3d-40a3-993e-99df5675f617" (UID: "ede839bd-0b3d-40a3-993e-99df5675f617"). InnerVolumeSpecName "kube-api-access-86nn5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:47:43 crc kubenswrapper[4762]: I0308 00:47:43.729973 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede839bd-0b3d-40a3-993e-99df5675f617-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ede839bd-0b3d-40a3-993e-99df5675f617" (UID: "ede839bd-0b3d-40a3-993e-99df5675f617"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:43 crc kubenswrapper[4762]: I0308 00:47:43.737431 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ede839bd-0b3d-40a3-993e-99df5675f617-config-data" (OuterVolumeSpecName: "config-data") pod "ede839bd-0b3d-40a3-993e-99df5675f617" (UID: "ede839bd-0b3d-40a3-993e-99df5675f617"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:43 crc kubenswrapper[4762]: I0308 00:47:43.796239 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ede839bd-0b3d-40a3-993e-99df5675f617-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:43 crc kubenswrapper[4762]: I0308 00:47:43.796277 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ede839bd-0b3d-40a3-993e-99df5675f617-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:43 crc kubenswrapper[4762]: I0308 00:47:43.796293 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86nn5\" (UniqueName: \"kubernetes.io/projected/ede839bd-0b3d-40a3-993e-99df5675f617-kube-api-access-86nn5\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.454465 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"478f1074-2af5-4d4a-b503-9e4418586e31","Type":"ContainerStarted","Data":"e93ac4285eb08836e80c25e7e36064d3508a5f3434b90e50af31852150158274"} Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.456797 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b6d410e-5c92-45b9-a1d2-d7da611f7adf","Type":"ContainerStarted","Data":"034b6c05e2227b0e833b497eb684f2e3531300f59fa66a08a66bc5a4fe5c195d"} Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.456824 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b6d410e-5c92-45b9-a1d2-d7da611f7adf","Type":"ContainerStarted","Data":"e9315739513637b02382eddc6c718e5ab4abb44f55d4855e49a730e241456ae7"} Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.459066 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ede839bd-0b3d-40a3-993e-99df5675f617","Type":"ContainerDied","Data":"03177a71684cd21c85897b2145bd46b89d4aa57acc2c1e2a61332b33f0f3c0ea"} Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.459094 4762 scope.go:117] "RemoveContainer" containerID="0ade3523ef37f533ab708b0a2d54f5af109ed7d5e82ff7d4ba251836373561bf" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.459212 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.515559 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.527591 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.536119 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 08 00:47:44 crc kubenswrapper[4762]: E0308 00:47:44.536533 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede839bd-0b3d-40a3-993e-99df5675f617" containerName="nova-cell1-novncproxy-novncproxy" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.536562 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede839bd-0b3d-40a3-993e-99df5675f617" containerName="nova-cell1-novncproxy-novncproxy" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.536753 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede839bd-0b3d-40a3-993e-99df5675f617" containerName="nova-cell1-novncproxy-novncproxy" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.537450 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.572306 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.572570 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.572694 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.575347 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.624336 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fcc2528-a57a-4197-879c-cd345baf4513-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fcc2528-a57a-4197-879c-cd345baf4513\") " pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.624379 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcc2528-a57a-4197-879c-cd345baf4513-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fcc2528-a57a-4197-879c-cd345baf4513\") " pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.624412 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6fd8\" (UniqueName: \"kubernetes.io/projected/5fcc2528-a57a-4197-879c-cd345baf4513-kube-api-access-l6fd8\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fcc2528-a57a-4197-879c-cd345baf4513\") " pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc 
kubenswrapper[4762]: I0308 00:47:44.624462 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcc2528-a57a-4197-879c-cd345baf4513-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fcc2528-a57a-4197-879c-cd345baf4513\") " pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.624649 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcc2528-a57a-4197-879c-cd345baf4513-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fcc2528-a57a-4197-879c-cd345baf4513\") " pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.628604 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.628747 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.630527 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.630558 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.640807 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.660209 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.726821 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5fcc2528-a57a-4197-879c-cd345baf4513-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fcc2528-a57a-4197-879c-cd345baf4513\") " pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.727199 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fcc2528-a57a-4197-879c-cd345baf4513-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fcc2528-a57a-4197-879c-cd345baf4513\") " pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.727274 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcc2528-a57a-4197-879c-cd345baf4513-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fcc2528-a57a-4197-879c-cd345baf4513\") " pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.727367 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6fd8\" (UniqueName: \"kubernetes.io/projected/5fcc2528-a57a-4197-879c-cd345baf4513-kube-api-access-l6fd8\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fcc2528-a57a-4197-879c-cd345baf4513\") " pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.727442 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcc2528-a57a-4197-879c-cd345baf4513-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fcc2528-a57a-4197-879c-cd345baf4513\") " pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.737140 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5fcc2528-a57a-4197-879c-cd345baf4513-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fcc2528-a57a-4197-879c-cd345baf4513\") " pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.738196 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcc2528-a57a-4197-879c-cd345baf4513-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fcc2528-a57a-4197-879c-cd345baf4513\") " pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.738593 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5fcc2528-a57a-4197-879c-cd345baf4513-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fcc2528-a57a-4197-879c-cd345baf4513\") " pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.744032 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fcc2528-a57a-4197-879c-cd345baf4513-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fcc2528-a57a-4197-879c-cd345baf4513\") " pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.762492 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6fd8\" (UniqueName: \"kubernetes.io/projected/5fcc2528-a57a-4197-879c-cd345baf4513-kube-api-access-l6fd8\") pod \"nova-cell1-novncproxy-0\" (UID: \"5fcc2528-a57a-4197-879c-cd345baf4513\") " pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.865533 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-h7wxf"] Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.867470 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.875643 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.877776 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-h7wxf"] Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.936893 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-config\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.936941 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.936997 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.937071 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 
08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.937099 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:44 crc kubenswrapper[4762]: I0308 00:47:44.937126 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtlv4\" (UniqueName: \"kubernetes.io/projected/12af2dd4-bf20-4ddf-81d2-d27e181b934f-kube-api-access-jtlv4\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:45 crc kubenswrapper[4762]: I0308 00:47:45.039152 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:45 crc kubenswrapper[4762]: I0308 00:47:45.040113 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:45 crc kubenswrapper[4762]: I0308 00:47:45.040247 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:45 crc 
kubenswrapper[4762]: I0308 00:47:45.040336 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:45 crc kubenswrapper[4762]: I0308 00:47:45.041044 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:45 crc kubenswrapper[4762]: I0308 00:47:45.041667 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtlv4\" (UniqueName: \"kubernetes.io/projected/12af2dd4-bf20-4ddf-81d2-d27e181b934f-kube-api-access-jtlv4\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:45 crc kubenswrapper[4762]: I0308 00:47:45.041812 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-config\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:45 crc kubenswrapper[4762]: I0308 00:47:45.041614 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:45 crc kubenswrapper[4762]: I0308 00:47:45.040984 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:45 crc kubenswrapper[4762]: I0308 00:47:45.041122 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:45 crc kubenswrapper[4762]: I0308 00:47:45.042899 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-config\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:45 crc kubenswrapper[4762]: I0308 00:47:45.067623 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtlv4\" (UniqueName: \"kubernetes.io/projected/12af2dd4-bf20-4ddf-81d2-d27e181b934f-kube-api-access-jtlv4\") pod \"dnsmasq-dns-f84f9ccf-h7wxf\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:45 crc kubenswrapper[4762]: I0308 00:47:45.194524 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:45 crc kubenswrapper[4762]: I0308 00:47:45.324959 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede839bd-0b3d-40a3-993e-99df5675f617" path="/var/lib/kubelet/pods/ede839bd-0b3d-40a3-993e-99df5675f617/volumes" Mar 08 00:47:45 crc kubenswrapper[4762]: I0308 00:47:45.515028 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 08 00:47:45 crc kubenswrapper[4762]: I0308 00:47:45.798124 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-h7wxf"] Mar 08 00:47:46 crc kubenswrapper[4762]: I0308 00:47:46.500036 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"478f1074-2af5-4d4a-b503-9e4418586e31","Type":"ContainerStarted","Data":"6246521e3de04b0dc1c9c95d41f131d53886bd783dbd52e1acf6e94fe75647d9"} Mar 08 00:47:46 crc kubenswrapper[4762]: I0308 00:47:46.500163 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" containerName="aodh-api" containerID="cri-o://d9ea6f6ed1de2e41cd66ad1329ed2769034d80b2ae903d6f1396ebac862908b4" gracePeriod=30 Mar 08 00:47:46 crc kubenswrapper[4762]: I0308 00:47:46.500223 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" containerName="aodh-listener" containerID="cri-o://6246521e3de04b0dc1c9c95d41f131d53886bd783dbd52e1acf6e94fe75647d9" gracePeriod=30 Mar 08 00:47:46 crc kubenswrapper[4762]: I0308 00:47:46.500243 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" containerName="aodh-notifier" containerID="cri-o://e93ac4285eb08836e80c25e7e36064d3508a5f3434b90e50af31852150158274" gracePeriod=30 Mar 08 00:47:46 crc kubenswrapper[4762]: I0308 
00:47:46.500256 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" containerName="aodh-evaluator" containerID="cri-o://626e0da3ddd506a08ca679a7d0a12b07a98f9b3f605af4def96986bcf242e111" gracePeriod=30 Mar 08 00:47:46 crc kubenswrapper[4762]: I0308 00:47:46.506697 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5fcc2528-a57a-4197-879c-cd345baf4513","Type":"ContainerStarted","Data":"12def14501cb9e979c8db8e21f001d679dd4be9cdd7b46caebd6a5c957d44b30"} Mar 08 00:47:46 crc kubenswrapper[4762]: I0308 00:47:46.506741 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5fcc2528-a57a-4197-879c-cd345baf4513","Type":"ContainerStarted","Data":"54c823cd73ff509390485c44627b8e5d8ed90162711c3f10a8db97cc8ebbc82b"} Mar 08 00:47:46 crc kubenswrapper[4762]: I0308 00:47:46.514021 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b6d410e-5c92-45b9-a1d2-d7da611f7adf","Type":"ContainerStarted","Data":"1e5b4f86ea4d8a5f66e173080056114d4e551146cbee93832861b8005c73c907"} Mar 08 00:47:46 crc kubenswrapper[4762]: I0308 00:47:46.514064 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b6d410e-5c92-45b9-a1d2-d7da611f7adf","Type":"ContainerStarted","Data":"c0d42823d414bd83d78a3d08f0be5611b976611e7f2faf7351733cfa7454a3d3"} Mar 08 00:47:46 crc kubenswrapper[4762]: I0308 00:47:46.517110 4762 generic.go:334] "Generic (PLEG): container finished" podID="12af2dd4-bf20-4ddf-81d2-d27e181b934f" containerID="5f84c0d25db48c19a37d4a3d0213248e57fb2e98b2abeaf66984e0880e831d94" exitCode=0 Mar 08 00:47:46 crc kubenswrapper[4762]: I0308 00:47:46.517154 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" 
event={"ID":"12af2dd4-bf20-4ddf-81d2-d27e181b934f","Type":"ContainerDied","Data":"5f84c0d25db48c19a37d4a3d0213248e57fb2e98b2abeaf66984e0880e831d94"} Mar 08 00:47:46 crc kubenswrapper[4762]: I0308 00:47:46.517195 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" event={"ID":"12af2dd4-bf20-4ddf-81d2-d27e181b934f","Type":"ContainerStarted","Data":"0c2435265fc98006d75e5f522d491aa69835417068c610f08ecb76fe95b04a57"} Mar 08 00:47:46 crc kubenswrapper[4762]: I0308 00:47:46.542836 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.536419795 podStartE2EDuration="8.542813908s" podCreationTimestamp="2026-03-08 00:47:38 +0000 UTC" firstStartedPulling="2026-03-08 00:47:39.126780228 +0000 UTC m=+1480.600924562" lastFinishedPulling="2026-03-08 00:47:45.133174331 +0000 UTC m=+1486.607318675" observedRunningTime="2026-03-08 00:47:46.524003025 +0000 UTC m=+1487.998147369" watchObservedRunningTime="2026-03-08 00:47:46.542813908 +0000 UTC m=+1488.016958252" Mar 08 00:47:46 crc kubenswrapper[4762]: I0308 00:47:46.589611 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.589592833 podStartE2EDuration="2.589592833s" podCreationTimestamp="2026-03-08 00:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:47:46.544278593 +0000 UTC m=+1488.018422937" watchObservedRunningTime="2026-03-08 00:47:46.589592833 +0000 UTC m=+1488.063737167" Mar 08 00:47:47 crc kubenswrapper[4762]: I0308 00:47:47.432489 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 08 00:47:47 crc kubenswrapper[4762]: I0308 00:47:47.555215 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" 
event={"ID":"12af2dd4-bf20-4ddf-81d2-d27e181b934f","Type":"ContainerStarted","Data":"509eab5269db47ee5f7acbd8f6b40d054ca6b7ea5d939918549c8f63d942674b"} Mar 08 00:47:47 crc kubenswrapper[4762]: I0308 00:47:47.556477 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:47 crc kubenswrapper[4762]: I0308 00:47:47.565199 4762 generic.go:334] "Generic (PLEG): container finished" podID="478f1074-2af5-4d4a-b503-9e4418586e31" containerID="e93ac4285eb08836e80c25e7e36064d3508a5f3434b90e50af31852150158274" exitCode=0 Mar 08 00:47:47 crc kubenswrapper[4762]: I0308 00:47:47.565229 4762 generic.go:334] "Generic (PLEG): container finished" podID="478f1074-2af5-4d4a-b503-9e4418586e31" containerID="626e0da3ddd506a08ca679a7d0a12b07a98f9b3f605af4def96986bcf242e111" exitCode=0 Mar 08 00:47:47 crc kubenswrapper[4762]: I0308 00:47:47.565238 4762 generic.go:334] "Generic (PLEG): container finished" podID="478f1074-2af5-4d4a-b503-9e4418586e31" containerID="d9ea6f6ed1de2e41cd66ad1329ed2769034d80b2ae903d6f1396ebac862908b4" exitCode=0 Mar 08 00:47:47 crc kubenswrapper[4762]: I0308 00:47:47.566088 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"478f1074-2af5-4d4a-b503-9e4418586e31","Type":"ContainerDied","Data":"e93ac4285eb08836e80c25e7e36064d3508a5f3434b90e50af31852150158274"} Mar 08 00:47:47 crc kubenswrapper[4762]: I0308 00:47:47.566137 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"478f1074-2af5-4d4a-b503-9e4418586e31","Type":"ContainerDied","Data":"626e0da3ddd506a08ca679a7d0a12b07a98f9b3f605af4def96986bcf242e111"} Mar 08 00:47:47 crc kubenswrapper[4762]: I0308 00:47:47.566148 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"478f1074-2af5-4d4a-b503-9e4418586e31","Type":"ContainerDied","Data":"d9ea6f6ed1de2e41cd66ad1329ed2769034d80b2ae903d6f1396ebac862908b4"} Mar 08 00:47:47 crc 
kubenswrapper[4762]: I0308 00:47:47.566274 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="24b679d5-59e9-4303-a0f4-54dc3e7dd056" containerName="nova-api-log" containerID="cri-o://e2d6a9e5997b0bfdef3f7e53f6614f814d891c4af380e3ff5afad1c2b4c92729" gracePeriod=30 Mar 08 00:47:47 crc kubenswrapper[4762]: I0308 00:47:47.566539 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="24b679d5-59e9-4303-a0f4-54dc3e7dd056" containerName="nova-api-api" containerID="cri-o://256fac76c17776b9be59aa881c56e7d5842676af558dbb441e55b5a9c711c515" gracePeriod=30 Mar 08 00:47:47 crc kubenswrapper[4762]: I0308 00:47:47.592047 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" podStartSLOduration=3.592025232 podStartE2EDuration="3.592025232s" podCreationTimestamp="2026-03-08 00:47:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:47:47.579535752 +0000 UTC m=+1489.053680096" watchObservedRunningTime="2026-03-08 00:47:47.592025232 +0000 UTC m=+1489.066169576" Mar 08 00:47:48 crc kubenswrapper[4762]: I0308 00:47:48.578103 4762 generic.go:334] "Generic (PLEG): container finished" podID="24b679d5-59e9-4303-a0f4-54dc3e7dd056" containerID="e2d6a9e5997b0bfdef3f7e53f6614f814d891c4af380e3ff5afad1c2b4c92729" exitCode=143 Mar 08 00:47:48 crc kubenswrapper[4762]: I0308 00:47:48.578194 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24b679d5-59e9-4303-a0f4-54dc3e7dd056","Type":"ContainerDied","Data":"e2d6a9e5997b0bfdef3f7e53f6614f814d891c4af380e3ff5afad1c2b4c92729"} Mar 08 00:47:48 crc kubenswrapper[4762]: I0308 00:47:48.583528 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1b6d410e-5c92-45b9-a1d2-d7da611f7adf","Type":"ContainerStarted","Data":"797c0249feded04b7fd17e1ad6cab1564e55a1edf1382451983d7219192ef590"} Mar 08 00:47:48 crc kubenswrapper[4762]: I0308 00:47:48.583595 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerName="ceilometer-central-agent" containerID="cri-o://034b6c05e2227b0e833b497eb684f2e3531300f59fa66a08a66bc5a4fe5c195d" gracePeriod=30 Mar 08 00:47:48 crc kubenswrapper[4762]: I0308 00:47:48.583643 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerName="proxy-httpd" containerID="cri-o://797c0249feded04b7fd17e1ad6cab1564e55a1edf1382451983d7219192ef590" gracePeriod=30 Mar 08 00:47:48 crc kubenswrapper[4762]: I0308 00:47:48.583662 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerName="sg-core" containerID="cri-o://1e5b4f86ea4d8a5f66e173080056114d4e551146cbee93832861b8005c73c907" gracePeriod=30 Mar 08 00:47:48 crc kubenswrapper[4762]: I0308 00:47:48.583697 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 08 00:47:48 crc kubenswrapper[4762]: I0308 00:47:48.583699 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerName="ceilometer-notification-agent" containerID="cri-o://c0d42823d414bd83d78a3d08f0be5611b976611e7f2faf7351733cfa7454a3d3" gracePeriod=30 Mar 08 00:47:48 crc kubenswrapper[4762]: I0308 00:47:48.623142 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.36111014 podStartE2EDuration="6.623120363s" podCreationTimestamp="2026-03-08 00:47:42 +0000 UTC" 
firstStartedPulling="2026-03-08 00:47:43.569200879 +0000 UTC m=+1485.043345213" lastFinishedPulling="2026-03-08 00:47:47.831211092 +0000 UTC m=+1489.305355436" observedRunningTime="2026-03-08 00:47:48.606092523 +0000 UTC m=+1490.080236907" watchObservedRunningTime="2026-03-08 00:47:48.623120363 +0000 UTC m=+1490.097264717" Mar 08 00:47:49 crc kubenswrapper[4762]: I0308 00:47:49.611584 4762 generic.go:334] "Generic (PLEG): container finished" podID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerID="797c0249feded04b7fd17e1ad6cab1564e55a1edf1382451983d7219192ef590" exitCode=0 Mar 08 00:47:49 crc kubenswrapper[4762]: I0308 00:47:49.611617 4762 generic.go:334] "Generic (PLEG): container finished" podID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerID="1e5b4f86ea4d8a5f66e173080056114d4e551146cbee93832861b8005c73c907" exitCode=2 Mar 08 00:47:49 crc kubenswrapper[4762]: I0308 00:47:49.611625 4762 generic.go:334] "Generic (PLEG): container finished" podID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerID="c0d42823d414bd83d78a3d08f0be5611b976611e7f2faf7351733cfa7454a3d3" exitCode=0 Mar 08 00:47:49 crc kubenswrapper[4762]: I0308 00:47:49.611664 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b6d410e-5c92-45b9-a1d2-d7da611f7adf","Type":"ContainerDied","Data":"797c0249feded04b7fd17e1ad6cab1564e55a1edf1382451983d7219192ef590"} Mar 08 00:47:49 crc kubenswrapper[4762]: I0308 00:47:49.611719 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b6d410e-5c92-45b9-a1d2-d7da611f7adf","Type":"ContainerDied","Data":"1e5b4f86ea4d8a5f66e173080056114d4e551146cbee93832861b8005c73c907"} Mar 08 00:47:49 crc kubenswrapper[4762]: I0308 00:47:49.611734 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b6d410e-5c92-45b9-a1d2-d7da611f7adf","Type":"ContainerDied","Data":"c0d42823d414bd83d78a3d08f0be5611b976611e7f2faf7351733cfa7454a3d3"} Mar 08 00:47:49 
crc kubenswrapper[4762]: I0308 00:47:49.876513 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.290223 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.399276 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xx6pl\" (UniqueName: \"kubernetes.io/projected/24b679d5-59e9-4303-a0f4-54dc3e7dd056-kube-api-access-xx6pl\") pod \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\" (UID: \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\") " Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.399404 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24b679d5-59e9-4303-a0f4-54dc3e7dd056-combined-ca-bundle\") pod \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\" (UID: \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\") " Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.399430 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24b679d5-59e9-4303-a0f4-54dc3e7dd056-config-data\") pod \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\" (UID: \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\") " Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.399669 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24b679d5-59e9-4303-a0f4-54dc3e7dd056-logs\") pod \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\" (UID: \"24b679d5-59e9-4303-a0f4-54dc3e7dd056\") " Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.401408 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24b679d5-59e9-4303-a0f4-54dc3e7dd056-logs" (OuterVolumeSpecName: "logs") pod 
"24b679d5-59e9-4303-a0f4-54dc3e7dd056" (UID: "24b679d5-59e9-4303-a0f4-54dc3e7dd056"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.407073 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24b679d5-59e9-4303-a0f4-54dc3e7dd056-kube-api-access-xx6pl" (OuterVolumeSpecName: "kube-api-access-xx6pl") pod "24b679d5-59e9-4303-a0f4-54dc3e7dd056" (UID: "24b679d5-59e9-4303-a0f4-54dc3e7dd056"). InnerVolumeSpecName "kube-api-access-xx6pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.453535 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24b679d5-59e9-4303-a0f4-54dc3e7dd056-config-data" (OuterVolumeSpecName: "config-data") pod "24b679d5-59e9-4303-a0f4-54dc3e7dd056" (UID: "24b679d5-59e9-4303-a0f4-54dc3e7dd056"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.457198 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24b679d5-59e9-4303-a0f4-54dc3e7dd056-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24b679d5-59e9-4303-a0f4-54dc3e7dd056" (UID: "24b679d5-59e9-4303-a0f4-54dc3e7dd056"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.502578 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xx6pl\" (UniqueName: \"kubernetes.io/projected/24b679d5-59e9-4303-a0f4-54dc3e7dd056-kube-api-access-xx6pl\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.502615 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24b679d5-59e9-4303-a0f4-54dc3e7dd056-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.502625 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24b679d5-59e9-4303-a0f4-54dc3e7dd056-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.502634 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24b679d5-59e9-4303-a0f4-54dc3e7dd056-logs\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.644996 4762 generic.go:334] "Generic (PLEG): container finished" podID="24b679d5-59e9-4303-a0f4-54dc3e7dd056" containerID="256fac76c17776b9be59aa881c56e7d5842676af558dbb441e55b5a9c711c515" exitCode=0 Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.645034 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24b679d5-59e9-4303-a0f4-54dc3e7dd056","Type":"ContainerDied","Data":"256fac76c17776b9be59aa881c56e7d5842676af558dbb441e55b5a9c711c515"} Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.645073 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"24b679d5-59e9-4303-a0f4-54dc3e7dd056","Type":"ContainerDied","Data":"460e64a605aac5376b947868ea45633ac04d59f4bd67906e6a7784ae009c4b98"} Mar 08 00:47:51 crc kubenswrapper[4762]: 
I0308 00:47:51.645089 4762 scope.go:117] "RemoveContainer" containerID="256fac76c17776b9be59aa881c56e7d5842676af558dbb441e55b5a9c711c515" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.645245 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.685534 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.700125 4762 scope.go:117] "RemoveContainer" containerID="e2d6a9e5997b0bfdef3f7e53f6614f814d891c4af380e3ff5afad1c2b4c92729" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.710436 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.726149 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 08 00:47:51 crc kubenswrapper[4762]: E0308 00:47:51.726739 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b679d5-59e9-4303-a0f4-54dc3e7dd056" containerName="nova-api-log" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.726779 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b679d5-59e9-4303-a0f4-54dc3e7dd056" containerName="nova-api-log" Mar 08 00:47:51 crc kubenswrapper[4762]: E0308 00:47:51.726824 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b679d5-59e9-4303-a0f4-54dc3e7dd056" containerName="nova-api-api" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.726833 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b679d5-59e9-4303-a0f4-54dc3e7dd056" containerName="nova-api-api" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.727232 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b679d5-59e9-4303-a0f4-54dc3e7dd056" containerName="nova-api-api" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.727256 4762 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="24b679d5-59e9-4303-a0f4-54dc3e7dd056" containerName="nova-api-log" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.728689 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.738029 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.738290 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.738486 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.750249 4762 scope.go:117] "RemoveContainer" containerID="256fac76c17776b9be59aa881c56e7d5842676af558dbb441e55b5a9c711c515" Mar 08 00:47:51 crc kubenswrapper[4762]: E0308 00:47:51.750966 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"256fac76c17776b9be59aa881c56e7d5842676af558dbb441e55b5a9c711c515\": container with ID starting with 256fac76c17776b9be59aa881c56e7d5842676af558dbb441e55b5a9c711c515 not found: ID does not exist" containerID="256fac76c17776b9be59aa881c56e7d5842676af558dbb441e55b5a9c711c515" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.751001 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"256fac76c17776b9be59aa881c56e7d5842676af558dbb441e55b5a9c711c515"} err="failed to get container status \"256fac76c17776b9be59aa881c56e7d5842676af558dbb441e55b5a9c711c515\": rpc error: code = NotFound desc = could not find container \"256fac76c17776b9be59aa881c56e7d5842676af558dbb441e55b5a9c711c515\": container with ID starting with 256fac76c17776b9be59aa881c56e7d5842676af558dbb441e55b5a9c711c515 not found: ID does 
not exist" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.751022 4762 scope.go:117] "RemoveContainer" containerID="e2d6a9e5997b0bfdef3f7e53f6614f814d891c4af380e3ff5afad1c2b4c92729" Mar 08 00:47:51 crc kubenswrapper[4762]: E0308 00:47:51.751375 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2d6a9e5997b0bfdef3f7e53f6614f814d891c4af380e3ff5afad1c2b4c92729\": container with ID starting with e2d6a9e5997b0bfdef3f7e53f6614f814d891c4af380e3ff5afad1c2b4c92729 not found: ID does not exist" containerID="e2d6a9e5997b0bfdef3f7e53f6614f814d891c4af380e3ff5afad1c2b4c92729" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.751396 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2d6a9e5997b0bfdef3f7e53f6614f814d891c4af380e3ff5afad1c2b4c92729"} err="failed to get container status \"e2d6a9e5997b0bfdef3f7e53f6614f814d891c4af380e3ff5afad1c2b4c92729\": rpc error: code = NotFound desc = could not find container \"e2d6a9e5997b0bfdef3f7e53f6614f814d891c4af380e3ff5afad1c2b4c92729\": container with ID starting with e2d6a9e5997b0bfdef3f7e53f6614f814d891c4af380e3ff5afad1c2b4c92729 not found: ID does not exist" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.768406 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.808084 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfnqf\" (UniqueName: \"kubernetes.io/projected/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-kube-api-access-lfnqf\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.808123 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-logs\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.808188 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-public-tls-certs\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.808288 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.808323 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-config-data\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.808346 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.909634 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " 
pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.910008 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-config-data\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.910035 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.910071 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfnqf\" (UniqueName: \"kubernetes.io/projected/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-kube-api-access-lfnqf\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.910091 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-logs\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.910150 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-public-tls-certs\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.911074 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-logs\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.916111 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.917115 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-config-data\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.917646 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-public-tls-certs\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.924532 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:51 crc kubenswrapper[4762]: I0308 00:47:51.929594 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfnqf\" (UniqueName: \"kubernetes.io/projected/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-kube-api-access-lfnqf\") pod \"nova-api-0\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " pod="openstack/nova-api-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.052302 4762 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.239425 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.321830 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-combined-ca-bundle\") pod \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.321863 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-scripts\") pod \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.321931 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssfx8\" (UniqueName: \"kubernetes.io/projected/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-kube-api-access-ssfx8\") pod \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.322015 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-log-httpd\") pod \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.322088 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-sg-core-conf-yaml\") pod \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\" (UID: 
\"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.322189 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-config-data\") pod \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.322258 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-run-httpd\") pod \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\" (UID: \"1b6d410e-5c92-45b9-a1d2-d7da611f7adf\") " Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.322911 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1b6d410e-5c92-45b9-a1d2-d7da611f7adf" (UID: "1b6d410e-5c92-45b9-a1d2-d7da611f7adf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.323313 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1b6d410e-5c92-45b9-a1d2-d7da611f7adf" (UID: "1b6d410e-5c92-45b9-a1d2-d7da611f7adf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.327261 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-kube-api-access-ssfx8" (OuterVolumeSpecName: "kube-api-access-ssfx8") pod "1b6d410e-5c92-45b9-a1d2-d7da611f7adf" (UID: "1b6d410e-5c92-45b9-a1d2-d7da611f7adf"). InnerVolumeSpecName "kube-api-access-ssfx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.328276 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-scripts" (OuterVolumeSpecName: "scripts") pod "1b6d410e-5c92-45b9-a1d2-d7da611f7adf" (UID: "1b6d410e-5c92-45b9-a1d2-d7da611f7adf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.366018 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1b6d410e-5c92-45b9-a1d2-d7da611f7adf" (UID: "1b6d410e-5c92-45b9-a1d2-d7da611f7adf"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.426628 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.426656 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssfx8\" (UniqueName: \"kubernetes.io/projected/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-kube-api-access-ssfx8\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.426668 4762 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.426685 4762 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:52 crc 
kubenswrapper[4762]: I0308 00:47:52.426696 4762 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.428237 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b6d410e-5c92-45b9-a1d2-d7da611f7adf" (UID: "1b6d410e-5c92-45b9-a1d2-d7da611f7adf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.449042 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-config-data" (OuterVolumeSpecName: "config-data") pod "1b6d410e-5c92-45b9-a1d2-d7da611f7adf" (UID: "1b6d410e-5c92-45b9-a1d2-d7da611f7adf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.528922 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.528960 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b6d410e-5c92-45b9-a1d2-d7da611f7adf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.646106 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.665270 4762 generic.go:334] "Generic (PLEG): container finished" podID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerID="034b6c05e2227b0e833b497eb684f2e3531300f59fa66a08a66bc5a4fe5c195d" exitCode=0 Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.665310 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b6d410e-5c92-45b9-a1d2-d7da611f7adf","Type":"ContainerDied","Data":"034b6c05e2227b0e833b497eb684f2e3531300f59fa66a08a66bc5a4fe5c195d"} Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.665335 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b6d410e-5c92-45b9-a1d2-d7da611f7adf","Type":"ContainerDied","Data":"e9315739513637b02382eddc6c718e5ab4abb44f55d4855e49a730e241456ae7"} Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.665352 4762 scope.go:117] "RemoveContainer" containerID="797c0249feded04b7fd17e1ad6cab1564e55a1edf1382451983d7219192ef590" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.665473 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.711993 4762 scope.go:117] "RemoveContainer" containerID="1e5b4f86ea4d8a5f66e173080056114d4e551146cbee93832861b8005c73c907" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.724015 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.740954 4762 scope.go:117] "RemoveContainer" containerID="c0d42823d414bd83d78a3d08f0be5611b976611e7f2faf7351733cfa7454a3d3" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.745816 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.757431 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:47:52 crc kubenswrapper[4762]: E0308 00:47:52.758000 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerName="proxy-httpd" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.758025 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerName="proxy-httpd" Mar 08 00:47:52 crc kubenswrapper[4762]: E0308 00:47:52.758038 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerName="ceilometer-notification-agent" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.758050 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerName="ceilometer-notification-agent" Mar 08 00:47:52 crc kubenswrapper[4762]: E0308 00:47:52.758064 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerName="sg-core" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.758071 4762 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerName="sg-core" Mar 08 00:47:52 crc kubenswrapper[4762]: E0308 00:47:52.758109 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerName="ceilometer-central-agent" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.758118 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerName="ceilometer-central-agent" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.758418 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerName="ceilometer-central-agent" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.758442 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerName="sg-core" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.758457 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerName="proxy-httpd" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.758475 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" containerName="ceilometer-notification-agent" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.760854 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.763066 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.763261 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.777421 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.780907 4762 scope.go:117] "RemoveContainer" containerID="034b6c05e2227b0e833b497eb684f2e3531300f59fa66a08a66bc5a4fe5c195d" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.806468 4762 scope.go:117] "RemoveContainer" containerID="797c0249feded04b7fd17e1ad6cab1564e55a1edf1382451983d7219192ef590" Mar 08 00:47:52 crc kubenswrapper[4762]: E0308 00:47:52.806859 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"797c0249feded04b7fd17e1ad6cab1564e55a1edf1382451983d7219192ef590\": container with ID starting with 797c0249feded04b7fd17e1ad6cab1564e55a1edf1382451983d7219192ef590 not found: ID does not exist" containerID="797c0249feded04b7fd17e1ad6cab1564e55a1edf1382451983d7219192ef590" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.806884 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"797c0249feded04b7fd17e1ad6cab1564e55a1edf1382451983d7219192ef590"} err="failed to get container status \"797c0249feded04b7fd17e1ad6cab1564e55a1edf1382451983d7219192ef590\": rpc error: code = NotFound desc = could not find container \"797c0249feded04b7fd17e1ad6cab1564e55a1edf1382451983d7219192ef590\": container with ID starting with 797c0249feded04b7fd17e1ad6cab1564e55a1edf1382451983d7219192ef590 not found: ID does not exist" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 
00:47:52.806902 4762 scope.go:117] "RemoveContainer" containerID="1e5b4f86ea4d8a5f66e173080056114d4e551146cbee93832861b8005c73c907" Mar 08 00:47:52 crc kubenswrapper[4762]: E0308 00:47:52.807241 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e5b4f86ea4d8a5f66e173080056114d4e551146cbee93832861b8005c73c907\": container with ID starting with 1e5b4f86ea4d8a5f66e173080056114d4e551146cbee93832861b8005c73c907 not found: ID does not exist" containerID="1e5b4f86ea4d8a5f66e173080056114d4e551146cbee93832861b8005c73c907" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.807284 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e5b4f86ea4d8a5f66e173080056114d4e551146cbee93832861b8005c73c907"} err="failed to get container status \"1e5b4f86ea4d8a5f66e173080056114d4e551146cbee93832861b8005c73c907\": rpc error: code = NotFound desc = could not find container \"1e5b4f86ea4d8a5f66e173080056114d4e551146cbee93832861b8005c73c907\": container with ID starting with 1e5b4f86ea4d8a5f66e173080056114d4e551146cbee93832861b8005c73c907 not found: ID does not exist" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.807316 4762 scope.go:117] "RemoveContainer" containerID="c0d42823d414bd83d78a3d08f0be5611b976611e7f2faf7351733cfa7454a3d3" Mar 08 00:47:52 crc kubenswrapper[4762]: E0308 00:47:52.807579 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0d42823d414bd83d78a3d08f0be5611b976611e7f2faf7351733cfa7454a3d3\": container with ID starting with c0d42823d414bd83d78a3d08f0be5611b976611e7f2faf7351733cfa7454a3d3 not found: ID does not exist" containerID="c0d42823d414bd83d78a3d08f0be5611b976611e7f2faf7351733cfa7454a3d3" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.807605 4762 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c0d42823d414bd83d78a3d08f0be5611b976611e7f2faf7351733cfa7454a3d3"} err="failed to get container status \"c0d42823d414bd83d78a3d08f0be5611b976611e7f2faf7351733cfa7454a3d3\": rpc error: code = NotFound desc = could not find container \"c0d42823d414bd83d78a3d08f0be5611b976611e7f2faf7351733cfa7454a3d3\": container with ID starting with c0d42823d414bd83d78a3d08f0be5611b976611e7f2faf7351733cfa7454a3d3 not found: ID does not exist" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.807622 4762 scope.go:117] "RemoveContainer" containerID="034b6c05e2227b0e833b497eb684f2e3531300f59fa66a08a66bc5a4fe5c195d" Mar 08 00:47:52 crc kubenswrapper[4762]: E0308 00:47:52.807894 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"034b6c05e2227b0e833b497eb684f2e3531300f59fa66a08a66bc5a4fe5c195d\": container with ID starting with 034b6c05e2227b0e833b497eb684f2e3531300f59fa66a08a66bc5a4fe5c195d not found: ID does not exist" containerID="034b6c05e2227b0e833b497eb684f2e3531300f59fa66a08a66bc5a4fe5c195d" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.807917 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"034b6c05e2227b0e833b497eb684f2e3531300f59fa66a08a66bc5a4fe5c195d"} err="failed to get container status \"034b6c05e2227b0e833b497eb684f2e3531300f59fa66a08a66bc5a4fe5c195d\": rpc error: code = NotFound desc = could not find container \"034b6c05e2227b0e833b497eb684f2e3531300f59fa66a08a66bc5a4fe5c195d\": container with ID starting with 034b6c05e2227b0e833b497eb684f2e3531300f59fa66a08a66bc5a4fe5c195d not found: ID does not exist" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.834530 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-sg-core-conf-yaml\") pod \"ceilometer-0\" 
(UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.834565 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-config-data\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.834592 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnznh\" (UniqueName: \"kubernetes.io/projected/ac8ddec1-5858-4067-a3f2-56162c0e09f1-kube-api-access-jnznh\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.834610 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac8ddec1-5858-4067-a3f2-56162c0e09f1-run-httpd\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.834869 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.834930 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-scripts\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 
00:47:52.834979 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac8ddec1-5858-4067-a3f2-56162c0e09f1-log-httpd\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.936538 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.936627 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-scripts\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.936690 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac8ddec1-5858-4067-a3f2-56162c0e09f1-log-httpd\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.936736 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.936777 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-config-data\") pod \"ceilometer-0\" 
(UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.937245 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac8ddec1-5858-4067-a3f2-56162c0e09f1-log-httpd\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.937269 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnznh\" (UniqueName: \"kubernetes.io/projected/ac8ddec1-5858-4067-a3f2-56162c0e09f1-kube-api-access-jnznh\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.937295 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac8ddec1-5858-4067-a3f2-56162c0e09f1-run-httpd\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.937580 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac8ddec1-5858-4067-a3f2-56162c0e09f1-run-httpd\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.941359 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-config-data\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.942217 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-scripts\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.942329 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.949970 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:52 crc kubenswrapper[4762]: I0308 00:47:52.956683 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnznh\" (UniqueName: \"kubernetes.io/projected/ac8ddec1-5858-4067-a3f2-56162c0e09f1-kube-api-access-jnznh\") pod \"ceilometer-0\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " pod="openstack/ceilometer-0" Mar 08 00:47:53 crc kubenswrapper[4762]: I0308 00:47:53.097535 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:47:53 crc kubenswrapper[4762]: I0308 00:47:53.321105 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b6d410e-5c92-45b9-a1d2-d7da611f7adf" path="/var/lib/kubelet/pods/1b6d410e-5c92-45b9-a1d2-d7da611f7adf/volumes" Mar 08 00:47:53 crc kubenswrapper[4762]: I0308 00:47:53.323069 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b679d5-59e9-4303-a0f4-54dc3e7dd056" path="/var/lib/kubelet/pods/24b679d5-59e9-4303-a0f4-54dc3e7dd056/volumes" Mar 08 00:47:53 crc kubenswrapper[4762]: I0308 00:47:53.647478 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:47:53 crc kubenswrapper[4762]: I0308 00:47:53.677751 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac8ddec1-5858-4067-a3f2-56162c0e09f1","Type":"ContainerStarted","Data":"6e050096e3039d8f5a5c9e6605107970f5ce4541bc5b5f8c602a7e659a3b06dc"} Mar 08 00:47:53 crc kubenswrapper[4762]: I0308 00:47:53.680070 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11be03d4-ebb5-41c4-be5f-7fab6a7659e3","Type":"ContainerStarted","Data":"c60b55fca9b93adda1a46e3f06dfa44cd87ca376e1671b1b9dd71422ec000ea8"} Mar 08 00:47:53 crc kubenswrapper[4762]: I0308 00:47:53.680160 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11be03d4-ebb5-41c4-be5f-7fab6a7659e3","Type":"ContainerStarted","Data":"8bc02e7a3267263e7a595a4db067879dd1266ea449fa5c250a66ff45ae37235f"} Mar 08 00:47:53 crc kubenswrapper[4762]: I0308 00:47:53.680184 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11be03d4-ebb5-41c4-be5f-7fab6a7659e3","Type":"ContainerStarted","Data":"eec7a503bcdd560540a51008decd7f390f5f38cc227e6802a47eca300ef4223e"} Mar 08 00:47:53 crc kubenswrapper[4762]: I0308 00:47:53.701945 4762 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.701921503 podStartE2EDuration="2.701921503s" podCreationTimestamp="2026-03-08 00:47:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:47:53.698588592 +0000 UTC m=+1495.172732956" watchObservedRunningTime="2026-03-08 00:47:53.701921503 +0000 UTC m=+1495.176065877" Mar 08 00:47:54 crc kubenswrapper[4762]: I0308 00:47:54.698699 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac8ddec1-5858-4067-a3f2-56162c0e09f1","Type":"ContainerStarted","Data":"19646fe92e6c1f21a498316c647982ab7041dff36caefbe4cf7dfff034089a65"} Mar 08 00:47:54 crc kubenswrapper[4762]: I0308 00:47:54.876851 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:54 crc kubenswrapper[4762]: I0308 00:47:54.905397 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:55 crc kubenswrapper[4762]: I0308 00:47:55.197050 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:47:55 crc kubenswrapper[4762]: I0308 00:47:55.297494 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-2g5s9"] Mar 08 00:47:55 crc kubenswrapper[4762]: I0308 00:47:55.297716 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" podUID="9299f483-ade6-448b-b5d8-2b39619abd6e" containerName="dnsmasq-dns" containerID="cri-o://445e700e2b418a1af4b38af93a30bd9cd60ae56e121107141194ddc3208c7f21" gracePeriod=10 Mar 08 00:47:55 crc kubenswrapper[4762]: I0308 00:47:55.712033 4762 generic.go:334] "Generic (PLEG): container finished" podID="9299f483-ade6-448b-b5d8-2b39619abd6e" 
containerID="445e700e2b418a1af4b38af93a30bd9cd60ae56e121107141194ddc3208c7f21" exitCode=0 Mar 08 00:47:55 crc kubenswrapper[4762]: I0308 00:47:55.712330 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" event={"ID":"9299f483-ade6-448b-b5d8-2b39619abd6e","Type":"ContainerDied","Data":"445e700e2b418a1af4b38af93a30bd9cd60ae56e121107141194ddc3208c7f21"} Mar 08 00:47:55 crc kubenswrapper[4762]: I0308 00:47:55.716387 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac8ddec1-5858-4067-a3f2-56162c0e09f1","Type":"ContainerStarted","Data":"c8b5f66562cc6f5a66a4206297258c255ec5023f21a9bca16e0d57b87828ec7d"} Mar 08 00:47:55 crc kubenswrapper[4762]: I0308 00:47:55.733110 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.024271 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-8694c"] Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.025828 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8694c" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.027455 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.028504 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.039513 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8694c"] Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.073917 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.124047 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-config\") pod \"9299f483-ade6-448b-b5d8-2b39619abd6e\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.124120 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-dns-svc\") pod \"9299f483-ade6-448b-b5d8-2b39619abd6e\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.124258 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-ovsdbserver-sb\") pod \"9299f483-ade6-448b-b5d8-2b39619abd6e\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.124285 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-ovsdbserver-nb\") pod \"9299f483-ade6-448b-b5d8-2b39619abd6e\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.124310 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-dns-swift-storage-0\") pod \"9299f483-ade6-448b-b5d8-2b39619abd6e\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.124458 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7d284\" 
(UniqueName: \"kubernetes.io/projected/9299f483-ade6-448b-b5d8-2b39619abd6e-kube-api-access-7d284\") pod \"9299f483-ade6-448b-b5d8-2b39619abd6e\" (UID: \"9299f483-ade6-448b-b5d8-2b39619abd6e\") " Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.124839 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-scripts\") pod \"nova-cell1-cell-mapping-8694c\" (UID: \"03cb9dfe-119e-4bce-808a-375258d654d7\") " pod="openstack/nova-cell1-cell-mapping-8694c" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.124868 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8694c\" (UID: \"03cb9dfe-119e-4bce-808a-375258d654d7\") " pod="openstack/nova-cell1-cell-mapping-8694c" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.124930 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-config-data\") pod \"nova-cell1-cell-mapping-8694c\" (UID: \"03cb9dfe-119e-4bce-808a-375258d654d7\") " pod="openstack/nova-cell1-cell-mapping-8694c" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.124981 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfrwm\" (UniqueName: \"kubernetes.io/projected/03cb9dfe-119e-4bce-808a-375258d654d7-kube-api-access-xfrwm\") pod \"nova-cell1-cell-mapping-8694c\" (UID: \"03cb9dfe-119e-4bce-808a-375258d654d7\") " pod="openstack/nova-cell1-cell-mapping-8694c" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.139798 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9299f483-ade6-448b-b5d8-2b39619abd6e-kube-api-access-7d284" (OuterVolumeSpecName: "kube-api-access-7d284") pod "9299f483-ade6-448b-b5d8-2b39619abd6e" (UID: "9299f483-ade6-448b-b5d8-2b39619abd6e"). InnerVolumeSpecName "kube-api-access-7d284". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.227185 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-config-data\") pod \"nova-cell1-cell-mapping-8694c\" (UID: \"03cb9dfe-119e-4bce-808a-375258d654d7\") " pod="openstack/nova-cell1-cell-mapping-8694c" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.227287 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfrwm\" (UniqueName: \"kubernetes.io/projected/03cb9dfe-119e-4bce-808a-375258d654d7-kube-api-access-xfrwm\") pod \"nova-cell1-cell-mapping-8694c\" (UID: \"03cb9dfe-119e-4bce-808a-375258d654d7\") " pod="openstack/nova-cell1-cell-mapping-8694c" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.227454 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-scripts\") pod \"nova-cell1-cell-mapping-8694c\" (UID: \"03cb9dfe-119e-4bce-808a-375258d654d7\") " pod="openstack/nova-cell1-cell-mapping-8694c" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.227514 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8694c\" (UID: \"03cb9dfe-119e-4bce-808a-375258d654d7\") " pod="openstack/nova-cell1-cell-mapping-8694c" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.227620 4762 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-7d284\" (UniqueName: \"kubernetes.io/projected/9299f483-ade6-448b-b5d8-2b39619abd6e-kube-api-access-7d284\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.234039 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-config-data\") pod \"nova-cell1-cell-mapping-8694c\" (UID: \"03cb9dfe-119e-4bce-808a-375258d654d7\") " pod="openstack/nova-cell1-cell-mapping-8694c" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.235987 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-scripts\") pod \"nova-cell1-cell-mapping-8694c\" (UID: \"03cb9dfe-119e-4bce-808a-375258d654d7\") " pod="openstack/nova-cell1-cell-mapping-8694c" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.237440 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-8694c\" (UID: \"03cb9dfe-119e-4bce-808a-375258d654d7\") " pod="openstack/nova-cell1-cell-mapping-8694c" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.250647 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfrwm\" (UniqueName: \"kubernetes.io/projected/03cb9dfe-119e-4bce-808a-375258d654d7-kube-api-access-xfrwm\") pod \"nova-cell1-cell-mapping-8694c\" (UID: \"03cb9dfe-119e-4bce-808a-375258d654d7\") " pod="openstack/nova-cell1-cell-mapping-8694c" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.261593 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9299f483-ade6-448b-b5d8-2b39619abd6e" (UID: 
"9299f483-ade6-448b-b5d8-2b39619abd6e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.264373 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9299f483-ade6-448b-b5d8-2b39619abd6e" (UID: "9299f483-ade6-448b-b5d8-2b39619abd6e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.268919 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9299f483-ade6-448b-b5d8-2b39619abd6e" (UID: "9299f483-ade6-448b-b5d8-2b39619abd6e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.285184 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-config" (OuterVolumeSpecName: "config") pod "9299f483-ade6-448b-b5d8-2b39619abd6e" (UID: "9299f483-ade6-448b-b5d8-2b39619abd6e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.286916 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9299f483-ade6-448b-b5d8-2b39619abd6e" (UID: "9299f483-ade6-448b-b5d8-2b39619abd6e"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.330456 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.330489 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.330499 4762 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.330510 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.330520 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9299f483-ade6-448b-b5d8-2b39619abd6e-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.411197 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8694c" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.745155 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" event={"ID":"9299f483-ade6-448b-b5d8-2b39619abd6e","Type":"ContainerDied","Data":"c6e9bd967dee9dd29ed65a23980cd56c2a24710da755523bcbafed24314fd66e"} Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.745457 4762 scope.go:117] "RemoveContainer" containerID="445e700e2b418a1af4b38af93a30bd9cd60ae56e121107141194ddc3208c7f21" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.745187 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-2g5s9" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.751232 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac8ddec1-5858-4067-a3f2-56162c0e09f1","Type":"ContainerStarted","Data":"badb6d4606c8be5d04a5a325cf9f150c36caa6172f8d275533408ed8495ff03a"} Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.780251 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-2g5s9"] Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.787086 4762 scope.go:117] "RemoveContainer" containerID="7984706ad04c1cf3c5a77d295994336905c5c86837bf5912b3a19772748ea60b" Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.801303 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-2g5s9"] Mar 08 00:47:56 crc kubenswrapper[4762]: I0308 00:47:56.987687 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-8694c"] Mar 08 00:47:57 crc kubenswrapper[4762]: I0308 00:47:57.276715 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9299f483-ade6-448b-b5d8-2b39619abd6e" path="/var/lib/kubelet/pods/9299f483-ade6-448b-b5d8-2b39619abd6e/volumes" Mar 08 00:47:57 crc 
kubenswrapper[4762]: I0308 00:47:57.765101 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8694c" event={"ID":"03cb9dfe-119e-4bce-808a-375258d654d7","Type":"ContainerStarted","Data":"4a7aecbc7b316b35babbcff875b1276ae13693d361452da3afa93603f93319b9"} Mar 08 00:47:57 crc kubenswrapper[4762]: I0308 00:47:57.765156 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8694c" event={"ID":"03cb9dfe-119e-4bce-808a-375258d654d7","Type":"ContainerStarted","Data":"edf8091f649c8f6688e030045c806b7bf113dcccf00bb9728def7f856ab97dc6"} Mar 08 00:47:58 crc kubenswrapper[4762]: I0308 00:47:58.789069 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac8ddec1-5858-4067-a3f2-56162c0e09f1","Type":"ContainerStarted","Data":"b2aa2c74672a1d414d46109bbf9572b51e447c2f72e098ebead6253550d7ad9e"} Mar 08 00:47:58 crc kubenswrapper[4762]: I0308 00:47:58.789798 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 08 00:47:58 crc kubenswrapper[4762]: I0308 00:47:58.828802 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.352774906 podStartE2EDuration="6.828778679s" podCreationTimestamp="2026-03-08 00:47:52 +0000 UTC" firstStartedPulling="2026-03-08 00:47:53.647300518 +0000 UTC m=+1495.121444862" lastFinishedPulling="2026-03-08 00:47:58.123304291 +0000 UTC m=+1499.597448635" observedRunningTime="2026-03-08 00:47:58.819837936 +0000 UTC m=+1500.293982330" watchObservedRunningTime="2026-03-08 00:47:58.828778679 +0000 UTC m=+1500.302923033" Mar 08 00:47:58 crc kubenswrapper[4762]: I0308 00:47:58.828961 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-8694c" podStartSLOduration=3.828955693 podStartE2EDuration="3.828955693s" podCreationTimestamp="2026-03-08 00:47:55 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:47:57.786978097 +0000 UTC m=+1499.261122441" watchObservedRunningTime="2026-03-08 00:47:58.828955693 +0000 UTC m=+1500.303100047" Mar 08 00:48:00 crc kubenswrapper[4762]: I0308 00:48:00.141598 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548848-5qzjs"] Mar 08 00:48:00 crc kubenswrapper[4762]: E0308 00:48:00.142507 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9299f483-ade6-448b-b5d8-2b39619abd6e" containerName="dnsmasq-dns" Mar 08 00:48:00 crc kubenswrapper[4762]: I0308 00:48:00.142543 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="9299f483-ade6-448b-b5d8-2b39619abd6e" containerName="dnsmasq-dns" Mar 08 00:48:00 crc kubenswrapper[4762]: E0308 00:48:00.142559 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9299f483-ade6-448b-b5d8-2b39619abd6e" containerName="init" Mar 08 00:48:00 crc kubenswrapper[4762]: I0308 00:48:00.142567 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="9299f483-ade6-448b-b5d8-2b39619abd6e" containerName="init" Mar 08 00:48:00 crc kubenswrapper[4762]: I0308 00:48:00.142858 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="9299f483-ade6-448b-b5d8-2b39619abd6e" containerName="dnsmasq-dns" Mar 08 00:48:00 crc kubenswrapper[4762]: I0308 00:48:00.143729 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548848-5qzjs" Mar 08 00:48:00 crc kubenswrapper[4762]: I0308 00:48:00.151145 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 00:48:00 crc kubenswrapper[4762]: I0308 00:48:00.151291 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 00:48:00 crc kubenswrapper[4762]: I0308 00:48:00.151170 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 00:48:00 crc kubenswrapper[4762]: I0308 00:48:00.157258 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548848-5qzjs"] Mar 08 00:48:00 crc kubenswrapper[4762]: I0308 00:48:00.216873 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf2n7\" (UniqueName: \"kubernetes.io/projected/2d06066b-71da-4572-86bd-d6958ec35438-kube-api-access-hf2n7\") pod \"auto-csr-approver-29548848-5qzjs\" (UID: \"2d06066b-71da-4572-86bd-d6958ec35438\") " pod="openshift-infra/auto-csr-approver-29548848-5qzjs" Mar 08 00:48:00 crc kubenswrapper[4762]: I0308 00:48:00.318497 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf2n7\" (UniqueName: \"kubernetes.io/projected/2d06066b-71da-4572-86bd-d6958ec35438-kube-api-access-hf2n7\") pod \"auto-csr-approver-29548848-5qzjs\" (UID: \"2d06066b-71da-4572-86bd-d6958ec35438\") " pod="openshift-infra/auto-csr-approver-29548848-5qzjs" Mar 08 00:48:00 crc kubenswrapper[4762]: I0308 00:48:00.352258 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf2n7\" (UniqueName: \"kubernetes.io/projected/2d06066b-71da-4572-86bd-d6958ec35438-kube-api-access-hf2n7\") pod \"auto-csr-approver-29548848-5qzjs\" (UID: \"2d06066b-71da-4572-86bd-d6958ec35438\") " 
pod="openshift-infra/auto-csr-approver-29548848-5qzjs" Mar 08 00:48:00 crc kubenswrapper[4762]: I0308 00:48:00.503278 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548848-5qzjs" Mar 08 00:48:01 crc kubenswrapper[4762]: W0308 00:48:01.003260 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d06066b_71da_4572_86bd_d6958ec35438.slice/crio-6cb533331461f287aff647305e309cd2e4b97fc2acfb3bc9728c69ee0faae04d WatchSource:0}: Error finding container 6cb533331461f287aff647305e309cd2e4b97fc2acfb3bc9728c69ee0faae04d: Status 404 returned error can't find the container with id 6cb533331461f287aff647305e309cd2e4b97fc2acfb3bc9728c69ee0faae04d Mar 08 00:48:01 crc kubenswrapper[4762]: I0308 00:48:01.005836 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548848-5qzjs"] Mar 08 00:48:01 crc kubenswrapper[4762]: I0308 00:48:01.824552 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548848-5qzjs" event={"ID":"2d06066b-71da-4572-86bd-d6958ec35438","Type":"ContainerStarted","Data":"6cb533331461f287aff647305e309cd2e4b97fc2acfb3bc9728c69ee0faae04d"} Mar 08 00:48:01 crc kubenswrapper[4762]: I0308 00:48:01.826324 4762 generic.go:334] "Generic (PLEG): container finished" podID="03cb9dfe-119e-4bce-808a-375258d654d7" containerID="4a7aecbc7b316b35babbcff875b1276ae13693d361452da3afa93603f93319b9" exitCode=0 Mar 08 00:48:01 crc kubenswrapper[4762]: I0308 00:48:01.826368 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8694c" event={"ID":"03cb9dfe-119e-4bce-808a-375258d654d7","Type":"ContainerDied","Data":"4a7aecbc7b316b35babbcff875b1276ae13693d361452da3afa93603f93319b9"} Mar 08 00:48:02 crc kubenswrapper[4762]: I0308 00:48:02.053305 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Mar 08 00:48:02 crc kubenswrapper[4762]: I0308 00:48:02.053355 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 08 00:48:02 crc kubenswrapper[4762]: I0308 00:48:02.861217 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548848-5qzjs" event={"ID":"2d06066b-71da-4572-86bd-d6958ec35438","Type":"ContainerStarted","Data":"fc70616047e64037618c8c631e50d3f2a648acf6d68f5ec4b5a1b3665eaa4418"} Mar 08 00:48:02 crc kubenswrapper[4762]: I0308 00:48:02.888008 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548848-5qzjs" podStartSLOduration=1.9812028179999999 podStartE2EDuration="2.887985854s" podCreationTimestamp="2026-03-08 00:48:00 +0000 UTC" firstStartedPulling="2026-03-08 00:48:01.007668727 +0000 UTC m=+1502.481813081" lastFinishedPulling="2026-03-08 00:48:01.914451763 +0000 UTC m=+1503.388596117" observedRunningTime="2026-03-08 00:48:02.880112044 +0000 UTC m=+1504.354256378" watchObservedRunningTime="2026-03-08 00:48:02.887985854 +0000 UTC m=+1504.362130208" Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.069094 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="11be03d4-ebb5-41c4-be5f-7fab6a7659e3" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.249:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.069631 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="11be03d4-ebb5-41c4-be5f-7fab6a7659e3" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.249:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.362866 4762 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8694c" Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.496378 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-config-data\") pod \"03cb9dfe-119e-4bce-808a-375258d654d7\" (UID: \"03cb9dfe-119e-4bce-808a-375258d654d7\") " Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.496477 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-scripts\") pod \"03cb9dfe-119e-4bce-808a-375258d654d7\" (UID: \"03cb9dfe-119e-4bce-808a-375258d654d7\") " Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.496621 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfrwm\" (UniqueName: \"kubernetes.io/projected/03cb9dfe-119e-4bce-808a-375258d654d7-kube-api-access-xfrwm\") pod \"03cb9dfe-119e-4bce-808a-375258d654d7\" (UID: \"03cb9dfe-119e-4bce-808a-375258d654d7\") " Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.496692 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-combined-ca-bundle\") pod \"03cb9dfe-119e-4bce-808a-375258d654d7\" (UID: \"03cb9dfe-119e-4bce-808a-375258d654d7\") " Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.517864 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-scripts" (OuterVolumeSpecName: "scripts") pod "03cb9dfe-119e-4bce-808a-375258d654d7" (UID: "03cb9dfe-119e-4bce-808a-375258d654d7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.518407 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03cb9dfe-119e-4bce-808a-375258d654d7-kube-api-access-xfrwm" (OuterVolumeSpecName: "kube-api-access-xfrwm") pod "03cb9dfe-119e-4bce-808a-375258d654d7" (UID: "03cb9dfe-119e-4bce-808a-375258d654d7"). InnerVolumeSpecName "kube-api-access-xfrwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.527386 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-config-data" (OuterVolumeSpecName: "config-data") pod "03cb9dfe-119e-4bce-808a-375258d654d7" (UID: "03cb9dfe-119e-4bce-808a-375258d654d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.540067 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03cb9dfe-119e-4bce-808a-375258d654d7" (UID: "03cb9dfe-119e-4bce-808a-375258d654d7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.598560 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfrwm\" (UniqueName: \"kubernetes.io/projected/03cb9dfe-119e-4bce-808a-375258d654d7-kube-api-access-xfrwm\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.598585 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.598594 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.598603 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03cb9dfe-119e-4bce-808a-375258d654d7-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.879188 4762 generic.go:334] "Generic (PLEG): container finished" podID="2d06066b-71da-4572-86bd-d6958ec35438" containerID="fc70616047e64037618c8c631e50d3f2a648acf6d68f5ec4b5a1b3665eaa4418" exitCode=0 Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.879298 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548848-5qzjs" event={"ID":"2d06066b-71da-4572-86bd-d6958ec35438","Type":"ContainerDied","Data":"fc70616047e64037618c8c631e50d3f2a648acf6d68f5ec4b5a1b3665eaa4418"} Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.883011 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-8694c" 
event={"ID":"03cb9dfe-119e-4bce-808a-375258d654d7","Type":"ContainerDied","Data":"edf8091f649c8f6688e030045c806b7bf113dcccf00bb9728def7f856ab97dc6"} Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.883038 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edf8091f649c8f6688e030045c806b7bf113dcccf00bb9728def7f856ab97dc6" Mar 08 00:48:03 crc kubenswrapper[4762]: I0308 00:48:03.883093 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-8694c" Mar 08 00:48:04 crc kubenswrapper[4762]: I0308 00:48:04.041596 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 08 00:48:04 crc kubenswrapper[4762]: I0308 00:48:04.041868 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="11be03d4-ebb5-41c4-be5f-7fab6a7659e3" containerName="nova-api-log" containerID="cri-o://8bc02e7a3267263e7a595a4db067879dd1266ea449fa5c250a66ff45ae37235f" gracePeriod=30 Mar 08 00:48:04 crc kubenswrapper[4762]: I0308 00:48:04.041960 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="11be03d4-ebb5-41c4-be5f-7fab6a7659e3" containerName="nova-api-api" containerID="cri-o://c60b55fca9b93adda1a46e3f06dfa44cd87ca376e1671b1b9dd71422ec000ea8" gracePeriod=30 Mar 08 00:48:04 crc kubenswrapper[4762]: I0308 00:48:04.082772 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 08 00:48:04 crc kubenswrapper[4762]: I0308 00:48:04.082986 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="5af53155-9537-4534-a24f-06d043ca1cef" containerName="nova-scheduler-scheduler" containerID="cri-o://7f9ec88c92cef509d85ccdc3804b445e7709fc5db8621103f6a44b2fd39e6a20" gracePeriod=30 Mar 08 00:48:04 crc kubenswrapper[4762]: I0308 00:48:04.115331 4762 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:48:04 crc kubenswrapper[4762]: I0308 00:48:04.115625 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="68df3ba7-dbb7-442b-a420-984272ca19e7" containerName="nova-metadata-log" containerID="cri-o://86f15442a4e4654f43089fbd70ee0cf1e156ca6f13c0faf67910ebc1622e9492" gracePeriod=30 Mar 08 00:48:04 crc kubenswrapper[4762]: I0308 00:48:04.115711 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="68df3ba7-dbb7-442b-a420-984272ca19e7" containerName="nova-metadata-metadata" containerID="cri-o://d0589d413ddac4ad92b8ba8bfbe7a51da93140d2860cebcf733b3aeae36f9225" gracePeriod=30 Mar 08 00:48:04 crc kubenswrapper[4762]: I0308 00:48:04.896452 4762 generic.go:334] "Generic (PLEG): container finished" podID="68df3ba7-dbb7-442b-a420-984272ca19e7" containerID="86f15442a4e4654f43089fbd70ee0cf1e156ca6f13c0faf67910ebc1622e9492" exitCode=143 Mar 08 00:48:04 crc kubenswrapper[4762]: I0308 00:48:04.896512 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68df3ba7-dbb7-442b-a420-984272ca19e7","Type":"ContainerDied","Data":"86f15442a4e4654f43089fbd70ee0cf1e156ca6f13c0faf67910ebc1622e9492"} Mar 08 00:48:04 crc kubenswrapper[4762]: I0308 00:48:04.899153 4762 generic.go:334] "Generic (PLEG): container finished" podID="11be03d4-ebb5-41c4-be5f-7fab6a7659e3" containerID="8bc02e7a3267263e7a595a4db067879dd1266ea449fa5c250a66ff45ae37235f" exitCode=143 Mar 08 00:48:04 crc kubenswrapper[4762]: I0308 00:48:04.899278 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11be03d4-ebb5-41c4-be5f-7fab6a7659e3","Type":"ContainerDied","Data":"8bc02e7a3267263e7a595a4db067879dd1266ea449fa5c250a66ff45ae37235f"} Mar 08 00:48:05 crc kubenswrapper[4762]: I0308 00:48:05.388486 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548848-5qzjs" Mar 08 00:48:05 crc kubenswrapper[4762]: I0308 00:48:05.546548 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf2n7\" (UniqueName: \"kubernetes.io/projected/2d06066b-71da-4572-86bd-d6958ec35438-kube-api-access-hf2n7\") pod \"2d06066b-71da-4572-86bd-d6958ec35438\" (UID: \"2d06066b-71da-4572-86bd-d6958ec35438\") " Mar 08 00:48:05 crc kubenswrapper[4762]: I0308 00:48:05.554978 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d06066b-71da-4572-86bd-d6958ec35438-kube-api-access-hf2n7" (OuterVolumeSpecName: "kube-api-access-hf2n7") pod "2d06066b-71da-4572-86bd-d6958ec35438" (UID: "2d06066b-71da-4572-86bd-d6958ec35438"). InnerVolumeSpecName "kube-api-access-hf2n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:48:05 crc kubenswrapper[4762]: I0308 00:48:05.649588 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf2n7\" (UniqueName: \"kubernetes.io/projected/2d06066b-71da-4572-86bd-d6958ec35438-kube-api-access-hf2n7\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:05 crc kubenswrapper[4762]: I0308 00:48:05.915072 4762 generic.go:334] "Generic (PLEG): container finished" podID="5af53155-9537-4534-a24f-06d043ca1cef" containerID="7f9ec88c92cef509d85ccdc3804b445e7709fc5db8621103f6a44b2fd39e6a20" exitCode=0 Mar 08 00:48:05 crc kubenswrapper[4762]: I0308 00:48:05.915143 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5af53155-9537-4534-a24f-06d043ca1cef","Type":"ContainerDied","Data":"7f9ec88c92cef509d85ccdc3804b445e7709fc5db8621103f6a44b2fd39e6a20"} Mar 08 00:48:05 crc kubenswrapper[4762]: I0308 00:48:05.917519 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548848-5qzjs" 
event={"ID":"2d06066b-71da-4572-86bd-d6958ec35438","Type":"ContainerDied","Data":"6cb533331461f287aff647305e309cd2e4b97fc2acfb3bc9728c69ee0faae04d"} Mar 08 00:48:05 crc kubenswrapper[4762]: I0308 00:48:05.917550 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cb533331461f287aff647305e309cd2e4b97fc2acfb3bc9728c69ee0faae04d" Mar 08 00:48:05 crc kubenswrapper[4762]: I0308 00:48:05.917581 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548848-5qzjs" Mar 08 00:48:05 crc kubenswrapper[4762]: I0308 00:48:05.981816 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548842-2fzrv"] Mar 08 00:48:05 crc kubenswrapper[4762]: I0308 00:48:05.991198 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548842-2fzrv"] Mar 08 00:48:06 crc kubenswrapper[4762]: I0308 00:48:06.177077 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 08 00:48:06 crc kubenswrapper[4762]: I0308 00:48:06.363787 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af53155-9537-4534-a24f-06d043ca1cef-config-data\") pod \"5af53155-9537-4534-a24f-06d043ca1cef\" (UID: \"5af53155-9537-4534-a24f-06d043ca1cef\") " Mar 08 00:48:06 crc kubenswrapper[4762]: I0308 00:48:06.364033 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af53155-9537-4534-a24f-06d043ca1cef-combined-ca-bundle\") pod \"5af53155-9537-4534-a24f-06d043ca1cef\" (UID: \"5af53155-9537-4534-a24f-06d043ca1cef\") " Mar 08 00:48:06 crc kubenswrapper[4762]: I0308 00:48:06.364200 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vws24\" (UniqueName: \"kubernetes.io/projected/5af53155-9537-4534-a24f-06d043ca1cef-kube-api-access-vws24\") pod \"5af53155-9537-4534-a24f-06d043ca1cef\" (UID: \"5af53155-9537-4534-a24f-06d043ca1cef\") " Mar 08 00:48:06 crc kubenswrapper[4762]: I0308 00:48:06.370287 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5af53155-9537-4534-a24f-06d043ca1cef-kube-api-access-vws24" (OuterVolumeSpecName: "kube-api-access-vws24") pod "5af53155-9537-4534-a24f-06d043ca1cef" (UID: "5af53155-9537-4534-a24f-06d043ca1cef"). InnerVolumeSpecName "kube-api-access-vws24". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:48:06 crc kubenswrapper[4762]: I0308 00:48:06.405562 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af53155-9537-4534-a24f-06d043ca1cef-config-data" (OuterVolumeSpecName: "config-data") pod "5af53155-9537-4534-a24f-06d043ca1cef" (UID: "5af53155-9537-4534-a24f-06d043ca1cef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:06 crc kubenswrapper[4762]: I0308 00:48:06.409699 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af53155-9537-4534-a24f-06d043ca1cef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5af53155-9537-4534-a24f-06d043ca1cef" (UID: "5af53155-9537-4534-a24f-06d043ca1cef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:06 crc kubenswrapper[4762]: I0308 00:48:06.467780 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af53155-9537-4534-a24f-06d043ca1cef-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:06 crc kubenswrapper[4762]: I0308 00:48:06.467820 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af53155-9537-4534-a24f-06d043ca1cef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:06 crc kubenswrapper[4762]: I0308 00:48:06.467839 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vws24\" (UniqueName: \"kubernetes.io/projected/5af53155-9537-4534-a24f-06d043ca1cef-kube-api-access-vws24\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:06 crc kubenswrapper[4762]: I0308 00:48:06.933103 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5af53155-9537-4534-a24f-06d043ca1cef","Type":"ContainerDied","Data":"1d476587f18e4fc143822057aefc2ffde5f09228289100da7dff1654a2491a3f"} Mar 08 00:48:06 crc kubenswrapper[4762]: I0308 00:48:06.933166 4762 scope.go:117] "RemoveContainer" containerID="7f9ec88c92cef509d85ccdc3804b445e7709fc5db8621103f6a44b2fd39e6a20" Mar 08 00:48:06 crc kubenswrapper[4762]: I0308 00:48:06.933236 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.003798 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.017507 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.031859 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 08 00:48:07 crc kubenswrapper[4762]: E0308 00:48:07.032961 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d06066b-71da-4572-86bd-d6958ec35438" containerName="oc" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.033162 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d06066b-71da-4572-86bd-d6958ec35438" containerName="oc" Mar 08 00:48:07 crc kubenswrapper[4762]: E0308 00:48:07.033332 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af53155-9537-4534-a24f-06d043ca1cef" containerName="nova-scheduler-scheduler" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.033491 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af53155-9537-4534-a24f-06d043ca1cef" containerName="nova-scheduler-scheduler" Mar 08 00:48:07 crc kubenswrapper[4762]: E0308 00:48:07.033631 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03cb9dfe-119e-4bce-808a-375258d654d7" containerName="nova-manage" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.033753 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="03cb9dfe-119e-4bce-808a-375258d654d7" containerName="nova-manage" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.034381 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="03cb9dfe-119e-4bce-808a-375258d654d7" containerName="nova-manage" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.034565 4762 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="2d06066b-71da-4572-86bd-d6958ec35438" containerName="oc" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.034700 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af53155-9537-4534-a24f-06d043ca1cef" containerName="nova-scheduler-scheduler" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.036049 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.041960 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.046818 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.182598 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t64b5\" (UniqueName: \"kubernetes.io/projected/7b9f29ab-520d-47cb-85dc-cd128b475b2a-kube-api-access-t64b5\") pod \"nova-scheduler-0\" (UID: \"7b9f29ab-520d-47cb-85dc-cd128b475b2a\") " pod="openstack/nova-scheduler-0" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.183439 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9f29ab-520d-47cb-85dc-cd128b475b2a-config-data\") pod \"nova-scheduler-0\" (UID: \"7b9f29ab-520d-47cb-85dc-cd128b475b2a\") " pod="openstack/nova-scheduler-0" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.183680 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9f29ab-520d-47cb-85dc-cd128b475b2a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7b9f29ab-520d-47cb-85dc-cd128b475b2a\") " pod="openstack/nova-scheduler-0" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 
00:48:07.252846 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="68df3ba7-dbb7-442b-a420-984272ca19e7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.240:8775/\": read tcp 10.217.0.2:36144->10.217.0.240:8775: read: connection reset by peer" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.253396 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="68df3ba7-dbb7-442b-a420-984272ca19e7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.240:8775/\": read tcp 10.217.0.2:36132->10.217.0.240:8775: read: connection reset by peer" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.285810 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5af53155-9537-4534-a24f-06d043ca1cef" path="/var/lib/kubelet/pods/5af53155-9537-4534-a24f-06d043ca1cef/volumes" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.287721 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9f29ab-520d-47cb-85dc-cd128b475b2a-config-data\") pod \"nova-scheduler-0\" (UID: \"7b9f29ab-520d-47cb-85dc-cd128b475b2a\") " pod="openstack/nova-scheduler-0" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.287935 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9f29ab-520d-47cb-85dc-cd128b475b2a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7b9f29ab-520d-47cb-85dc-cd128b475b2a\") " pod="openstack/nova-scheduler-0" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.288024 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t64b5\" (UniqueName: \"kubernetes.io/projected/7b9f29ab-520d-47cb-85dc-cd128b475b2a-kube-api-access-t64b5\") pod \"nova-scheduler-0\" (UID: 
\"7b9f29ab-520d-47cb-85dc-cd128b475b2a\") " pod="openstack/nova-scheduler-0" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.289492 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88d54f80-7455-4a4b-8d9e-b5e24de88ed5" path="/var/lib/kubelet/pods/88d54f80-7455-4a4b-8d9e-b5e24de88ed5/volumes" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.310838 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b9f29ab-520d-47cb-85dc-cd128b475b2a-config-data\") pod \"nova-scheduler-0\" (UID: \"7b9f29ab-520d-47cb-85dc-cd128b475b2a\") " pod="openstack/nova-scheduler-0" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.311090 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b9f29ab-520d-47cb-85dc-cd128b475b2a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7b9f29ab-520d-47cb-85dc-cd128b475b2a\") " pod="openstack/nova-scheduler-0" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.337652 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t64b5\" (UniqueName: \"kubernetes.io/projected/7b9f29ab-520d-47cb-85dc-cd128b475b2a-kube-api-access-t64b5\") pod \"nova-scheduler-0\" (UID: \"7b9f29ab-520d-47cb-85dc-cd128b475b2a\") " pod="openstack/nova-scheduler-0" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.382867 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 08 00:48:07 crc kubenswrapper[4762]: E0308 00:48:07.556166 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68df3ba7_dbb7_442b_a420_984272ca19e7.slice/crio-conmon-d0589d413ddac4ad92b8ba8bfbe7a51da93140d2860cebcf733b3aeae36f9225.scope\": RecentStats: unable to find data in memory cache]" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.776020 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.898848 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.902841 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-combined-ca-bundle\") pod \"68df3ba7-dbb7-442b-a420-984272ca19e7\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.903104 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7dm4\" (UniqueName: \"kubernetes.io/projected/68df3ba7-dbb7-442b-a420-984272ca19e7-kube-api-access-p7dm4\") pod \"68df3ba7-dbb7-442b-a420-984272ca19e7\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.903215 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-nova-metadata-tls-certs\") pod \"68df3ba7-dbb7-442b-a420-984272ca19e7\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.903442 4762 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-config-data\") pod \"68df3ba7-dbb7-442b-a420-984272ca19e7\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.903544 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68df3ba7-dbb7-442b-a420-984272ca19e7-logs\") pod \"68df3ba7-dbb7-442b-a420-984272ca19e7\" (UID: \"68df3ba7-dbb7-442b-a420-984272ca19e7\") " Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.904330 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68df3ba7-dbb7-442b-a420-984272ca19e7-logs" (OuterVolumeSpecName: "logs") pod "68df3ba7-dbb7-442b-a420-984272ca19e7" (UID: "68df3ba7-dbb7-442b-a420-984272ca19e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.909442 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68df3ba7-dbb7-442b-a420-984272ca19e7-kube-api-access-p7dm4" (OuterVolumeSpecName: "kube-api-access-p7dm4") pod "68df3ba7-dbb7-442b-a420-984272ca19e7" (UID: "68df3ba7-dbb7-442b-a420-984272ca19e7"). InnerVolumeSpecName "kube-api-access-p7dm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.946205 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "68df3ba7-dbb7-442b-a420-984272ca19e7" (UID: "68df3ba7-dbb7-442b-a420-984272ca19e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.961247 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-config-data" (OuterVolumeSpecName: "config-data") pod "68df3ba7-dbb7-442b-a420-984272ca19e7" (UID: "68df3ba7-dbb7-442b-a420-984272ca19e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.963570 4762 generic.go:334] "Generic (PLEG): container finished" podID="68df3ba7-dbb7-442b-a420-984272ca19e7" containerID="d0589d413ddac4ad92b8ba8bfbe7a51da93140d2860cebcf733b3aeae36f9225" exitCode=0 Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.963693 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68df3ba7-dbb7-442b-a420-984272ca19e7","Type":"ContainerDied","Data":"d0589d413ddac4ad92b8ba8bfbe7a51da93140d2860cebcf733b3aeae36f9225"} Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.963727 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"68df3ba7-dbb7-442b-a420-984272ca19e7","Type":"ContainerDied","Data":"a79e5d2c8beaa1f3a4e13957c3d09f650b67c54b0a4c0d4a46f01dd347f16ec5"} Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.963748 4762 scope.go:117] "RemoveContainer" containerID="d0589d413ddac4ad92b8ba8bfbe7a51da93140d2860cebcf733b3aeae36f9225" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.963981 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.967115 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7b9f29ab-520d-47cb-85dc-cd128b475b2a","Type":"ContainerStarted","Data":"55175d68ba75589c7b4ac0eff947577405004b16b20d18bb0353a6c8a99fed33"} Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.968226 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "68df3ba7-dbb7-442b-a420-984272ca19e7" (UID: "68df3ba7-dbb7-442b-a420-984272ca19e7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:07 crc kubenswrapper[4762]: I0308 00:48:07.992247 4762 scope.go:117] "RemoveContainer" containerID="86f15442a4e4654f43089fbd70ee0cf1e156ca6f13c0faf67910ebc1622e9492" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.005519 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.005565 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/68df3ba7-dbb7-442b-a420-984272ca19e7-logs\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.005577 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.005592 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7dm4\" (UniqueName: 
\"kubernetes.io/projected/68df3ba7-dbb7-442b-a420-984272ca19e7-kube-api-access-p7dm4\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.005605 4762 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/68df3ba7-dbb7-442b-a420-984272ca19e7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.018792 4762 scope.go:117] "RemoveContainer" containerID="d0589d413ddac4ad92b8ba8bfbe7a51da93140d2860cebcf733b3aeae36f9225" Mar 08 00:48:08 crc kubenswrapper[4762]: E0308 00:48:08.019380 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0589d413ddac4ad92b8ba8bfbe7a51da93140d2860cebcf733b3aeae36f9225\": container with ID starting with d0589d413ddac4ad92b8ba8bfbe7a51da93140d2860cebcf733b3aeae36f9225 not found: ID does not exist" containerID="d0589d413ddac4ad92b8ba8bfbe7a51da93140d2860cebcf733b3aeae36f9225" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.019420 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0589d413ddac4ad92b8ba8bfbe7a51da93140d2860cebcf733b3aeae36f9225"} err="failed to get container status \"d0589d413ddac4ad92b8ba8bfbe7a51da93140d2860cebcf733b3aeae36f9225\": rpc error: code = NotFound desc = could not find container \"d0589d413ddac4ad92b8ba8bfbe7a51da93140d2860cebcf733b3aeae36f9225\": container with ID starting with d0589d413ddac4ad92b8ba8bfbe7a51da93140d2860cebcf733b3aeae36f9225 not found: ID does not exist" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.019447 4762 scope.go:117] "RemoveContainer" containerID="86f15442a4e4654f43089fbd70ee0cf1e156ca6f13c0faf67910ebc1622e9492" Mar 08 00:48:08 crc kubenswrapper[4762]: E0308 00:48:08.019914 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"86f15442a4e4654f43089fbd70ee0cf1e156ca6f13c0faf67910ebc1622e9492\": container with ID starting with 86f15442a4e4654f43089fbd70ee0cf1e156ca6f13c0faf67910ebc1622e9492 not found: ID does not exist" containerID="86f15442a4e4654f43089fbd70ee0cf1e156ca6f13c0faf67910ebc1622e9492" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.019944 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86f15442a4e4654f43089fbd70ee0cf1e156ca6f13c0faf67910ebc1622e9492"} err="failed to get container status \"86f15442a4e4654f43089fbd70ee0cf1e156ca6f13c0faf67910ebc1622e9492\": rpc error: code = NotFound desc = could not find container \"86f15442a4e4654f43089fbd70ee0cf1e156ca6f13c0faf67910ebc1622e9492\": container with ID starting with 86f15442a4e4654f43089fbd70ee0cf1e156ca6f13c0faf67910ebc1622e9492 not found: ID does not exist" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.309911 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.319274 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.338992 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:48:08 crc kubenswrapper[4762]: E0308 00:48:08.339577 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68df3ba7-dbb7-442b-a420-984272ca19e7" containerName="nova-metadata-log" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.339592 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="68df3ba7-dbb7-442b-a420-984272ca19e7" containerName="nova-metadata-log" Mar 08 00:48:08 crc kubenswrapper[4762]: E0308 00:48:08.339608 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68df3ba7-dbb7-442b-a420-984272ca19e7" containerName="nova-metadata-metadata" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 
00:48:08.339614 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="68df3ba7-dbb7-442b-a420-984272ca19e7" containerName="nova-metadata-metadata" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.339804 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="68df3ba7-dbb7-442b-a420-984272ca19e7" containerName="nova-metadata-metadata" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.339813 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="68df3ba7-dbb7-442b-a420-984272ca19e7" containerName="nova-metadata-log" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.340832 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.343596 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.343907 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.367930 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.524937 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3231a23-d920-4cf2-b78f-65ecf0d67c77-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f3231a23-d920-4cf2-b78f-65ecf0d67c77\") " pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.524978 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3231a23-d920-4cf2-b78f-65ecf0d67c77-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f3231a23-d920-4cf2-b78f-65ecf0d67c77\") " 
pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.525013 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4zzf\" (UniqueName: \"kubernetes.io/projected/f3231a23-d920-4cf2-b78f-65ecf0d67c77-kube-api-access-q4zzf\") pod \"nova-metadata-0\" (UID: \"f3231a23-d920-4cf2-b78f-65ecf0d67c77\") " pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.525031 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3231a23-d920-4cf2-b78f-65ecf0d67c77-config-data\") pod \"nova-metadata-0\" (UID: \"f3231a23-d920-4cf2-b78f-65ecf0d67c77\") " pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.525352 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3231a23-d920-4cf2-b78f-65ecf0d67c77-logs\") pod \"nova-metadata-0\" (UID: \"f3231a23-d920-4cf2-b78f-65ecf0d67c77\") " pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.626974 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3231a23-d920-4cf2-b78f-65ecf0d67c77-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f3231a23-d920-4cf2-b78f-65ecf0d67c77\") " pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.627282 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3231a23-d920-4cf2-b78f-65ecf0d67c77-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f3231a23-d920-4cf2-b78f-65ecf0d67c77\") " pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.627317 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4zzf\" (UniqueName: \"kubernetes.io/projected/f3231a23-d920-4cf2-b78f-65ecf0d67c77-kube-api-access-q4zzf\") pod \"nova-metadata-0\" (UID: \"f3231a23-d920-4cf2-b78f-65ecf0d67c77\") " pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.627335 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3231a23-d920-4cf2-b78f-65ecf0d67c77-config-data\") pod \"nova-metadata-0\" (UID: \"f3231a23-d920-4cf2-b78f-65ecf0d67c77\") " pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.627418 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3231a23-d920-4cf2-b78f-65ecf0d67c77-logs\") pod \"nova-metadata-0\" (UID: \"f3231a23-d920-4cf2-b78f-65ecf0d67c77\") " pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.628079 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3231a23-d920-4cf2-b78f-65ecf0d67c77-logs\") pod \"nova-metadata-0\" (UID: \"f3231a23-d920-4cf2-b78f-65ecf0d67c77\") " pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.632472 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3231a23-d920-4cf2-b78f-65ecf0d67c77-config-data\") pod \"nova-metadata-0\" (UID: \"f3231a23-d920-4cf2-b78f-65ecf0d67c77\") " pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.632898 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3231a23-d920-4cf2-b78f-65ecf0d67c77-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: 
\"f3231a23-d920-4cf2-b78f-65ecf0d67c77\") " pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.644455 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3231a23-d920-4cf2-b78f-65ecf0d67c77-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f3231a23-d920-4cf2-b78f-65ecf0d67c77\") " pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.648036 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4zzf\" (UniqueName: \"kubernetes.io/projected/f3231a23-d920-4cf2-b78f-65ecf0d67c77-kube-api-access-q4zzf\") pod \"nova-metadata-0\" (UID: \"f3231a23-d920-4cf2-b78f-65ecf0d67c77\") " pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.723733 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.944691 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.982322 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7b9f29ab-520d-47cb-85dc-cd128b475b2a","Type":"ContainerStarted","Data":"d4fe7819d1f34f8a4ba92a8e8364d66db6e09c7344a62749818b5bc19cf5294c"} Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.991221 4762 generic.go:334] "Generic (PLEG): container finished" podID="11be03d4-ebb5-41c4-be5f-7fab6a7659e3" containerID="c60b55fca9b93adda1a46e3f06dfa44cd87ca376e1671b1b9dd71422ec000ea8" exitCode=0 Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.991255 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11be03d4-ebb5-41c4-be5f-7fab6a7659e3","Type":"ContainerDied","Data":"c60b55fca9b93adda1a46e3f06dfa44cd87ca376e1671b1b9dd71422ec000ea8"} Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.991278 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11be03d4-ebb5-41c4-be5f-7fab6a7659e3","Type":"ContainerDied","Data":"eec7a503bcdd560540a51008decd7f390f5f38cc227e6802a47eca300ef4223e"} Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.991294 4762 scope.go:117] "RemoveContainer" containerID="c60b55fca9b93adda1a46e3f06dfa44cd87ca376e1671b1b9dd71422ec000ea8" Mar 08 00:48:08 crc kubenswrapper[4762]: I0308 00:48:08.991410 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.001297 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.001280784 podStartE2EDuration="3.001280784s" podCreationTimestamp="2026-03-08 00:48:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:48:08.994839597 +0000 UTC m=+1510.468983941" watchObservedRunningTime="2026-03-08 00:48:09.001280784 +0000 UTC m=+1510.475425128" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.018360 4762 scope.go:117] "RemoveContainer" containerID="8bc02e7a3267263e7a595a4db067879dd1266ea449fa5c250a66ff45ae37235f" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.047957 4762 scope.go:117] "RemoveContainer" containerID="c60b55fca9b93adda1a46e3f06dfa44cd87ca376e1671b1b9dd71422ec000ea8" Mar 08 00:48:09 crc kubenswrapper[4762]: E0308 00:48:09.048281 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c60b55fca9b93adda1a46e3f06dfa44cd87ca376e1671b1b9dd71422ec000ea8\": container with ID starting with c60b55fca9b93adda1a46e3f06dfa44cd87ca376e1671b1b9dd71422ec000ea8 not found: ID does not exist" containerID="c60b55fca9b93adda1a46e3f06dfa44cd87ca376e1671b1b9dd71422ec000ea8" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.048322 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c60b55fca9b93adda1a46e3f06dfa44cd87ca376e1671b1b9dd71422ec000ea8"} err="failed to get container status \"c60b55fca9b93adda1a46e3f06dfa44cd87ca376e1671b1b9dd71422ec000ea8\": rpc error: code = NotFound desc = could not find container \"c60b55fca9b93adda1a46e3f06dfa44cd87ca376e1671b1b9dd71422ec000ea8\": container with ID starting with c60b55fca9b93adda1a46e3f06dfa44cd87ca376e1671b1b9dd71422ec000ea8 not 
found: ID does not exist" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.048347 4762 scope.go:117] "RemoveContainer" containerID="8bc02e7a3267263e7a595a4db067879dd1266ea449fa5c250a66ff45ae37235f" Mar 08 00:48:09 crc kubenswrapper[4762]: E0308 00:48:09.048735 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bc02e7a3267263e7a595a4db067879dd1266ea449fa5c250a66ff45ae37235f\": container with ID starting with 8bc02e7a3267263e7a595a4db067879dd1266ea449fa5c250a66ff45ae37235f not found: ID does not exist" containerID="8bc02e7a3267263e7a595a4db067879dd1266ea449fa5c250a66ff45ae37235f" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.048775 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bc02e7a3267263e7a595a4db067879dd1266ea449fa5c250a66ff45ae37235f"} err="failed to get container status \"8bc02e7a3267263e7a595a4db067879dd1266ea449fa5c250a66ff45ae37235f\": rpc error: code = NotFound desc = could not find container \"8bc02e7a3267263e7a595a4db067879dd1266ea449fa5c250a66ff45ae37235f\": container with ID starting with 8bc02e7a3267263e7a595a4db067879dd1266ea449fa5c250a66ff45ae37235f not found: ID does not exist" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.135839 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfnqf\" (UniqueName: \"kubernetes.io/projected/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-kube-api-access-lfnqf\") pod \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.135972 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-public-tls-certs\") pod \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " Mar 08 
00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.136070 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-logs\") pod \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.136157 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-config-data\") pod \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.136219 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-combined-ca-bundle\") pod \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.136260 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-internal-tls-certs\") pod \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\" (UID: \"11be03d4-ebb5-41c4-be5f-7fab6a7659e3\") " Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.141387 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-logs" (OuterVolumeSpecName: "logs") pod "11be03d4-ebb5-41c4-be5f-7fab6a7659e3" (UID: "11be03d4-ebb5-41c4-be5f-7fab6a7659e3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.143798 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-kube-api-access-lfnqf" (OuterVolumeSpecName: "kube-api-access-lfnqf") pod "11be03d4-ebb5-41c4-be5f-7fab6a7659e3" (UID: "11be03d4-ebb5-41c4-be5f-7fab6a7659e3"). InnerVolumeSpecName "kube-api-access-lfnqf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.166260 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-config-data" (OuterVolumeSpecName: "config-data") pod "11be03d4-ebb5-41c4-be5f-7fab6a7659e3" (UID: "11be03d4-ebb5-41c4-be5f-7fab6a7659e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.170749 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11be03d4-ebb5-41c4-be5f-7fab6a7659e3" (UID: "11be03d4-ebb5-41c4-be5f-7fab6a7659e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.207040 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "11be03d4-ebb5-41c4-be5f-7fab6a7659e3" (UID: "11be03d4-ebb5-41c4-be5f-7fab6a7659e3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.233122 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "11be03d4-ebb5-41c4-be5f-7fab6a7659e3" (UID: "11be03d4-ebb5-41c4-be5f-7fab6a7659e3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.238478 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-logs\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.238509 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.238520 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.238534 4762 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.238544 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfnqf\" (UniqueName: \"kubernetes.io/projected/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-kube-api-access-lfnqf\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.238553 4762 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/11be03d4-ebb5-41c4-be5f-7fab6a7659e3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.255100 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.296618 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68df3ba7-dbb7-442b-a420-984272ca19e7" path="/var/lib/kubelet/pods/68df3ba7-dbb7-442b-a420-984272ca19e7/volumes" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.363176 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.379835 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.388837 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 08 00:48:09 crc kubenswrapper[4762]: E0308 00:48:09.389697 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11be03d4-ebb5-41c4-be5f-7fab6a7659e3" containerName="nova-api-api" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.389716 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="11be03d4-ebb5-41c4-be5f-7fab6a7659e3" containerName="nova-api-api" Mar 08 00:48:09 crc kubenswrapper[4762]: E0308 00:48:09.389768 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11be03d4-ebb5-41c4-be5f-7fab6a7659e3" containerName="nova-api-log" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.389775 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="11be03d4-ebb5-41c4-be5f-7fab6a7659e3" containerName="nova-api-log" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.389980 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="11be03d4-ebb5-41c4-be5f-7fab6a7659e3" containerName="nova-api-log" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.390000 4762 
memory_manager.go:354] "RemoveStaleState removing state" podUID="11be03d4-ebb5-41c4-be5f-7fab6a7659e3" containerName="nova-api-api" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.391478 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.394047 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.394275 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.394562 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.400471 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.545065 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/312fe1d6-7a03-4cb5-8675-3863ce774c6f-public-tls-certs\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.545272 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qblc7\" (UniqueName: \"kubernetes.io/projected/312fe1d6-7a03-4cb5-8675-3863ce774c6f-kube-api-access-qblc7\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.545635 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/312fe1d6-7a03-4cb5-8675-3863ce774c6f-internal-tls-certs\") pod \"nova-api-0\" 
(UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.545667 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312fe1d6-7a03-4cb5-8675-3863ce774c6f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.545739 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/312fe1d6-7a03-4cb5-8675-3863ce774c6f-logs\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.545822 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312fe1d6-7a03-4cb5-8675-3863ce774c6f-config-data\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.649988 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/312fe1d6-7a03-4cb5-8675-3863ce774c6f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.650068 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312fe1d6-7a03-4cb5-8675-3863ce774c6f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.650158 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/312fe1d6-7a03-4cb5-8675-3863ce774c6f-logs\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.650311 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312fe1d6-7a03-4cb5-8675-3863ce774c6f-config-data\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.651277 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/312fe1d6-7a03-4cb5-8675-3863ce774c6f-logs\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.651409 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/312fe1d6-7a03-4cb5-8675-3863ce774c6f-public-tls-certs\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.652233 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qblc7\" (UniqueName: \"kubernetes.io/projected/312fe1d6-7a03-4cb5-8675-3863ce774c6f-kube-api-access-qblc7\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.653988 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/312fe1d6-7a03-4cb5-8675-3863ce774c6f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc 
kubenswrapper[4762]: I0308 00:48:09.656980 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/312fe1d6-7a03-4cb5-8675-3863ce774c6f-public-tls-certs\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.662554 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/312fe1d6-7a03-4cb5-8675-3863ce774c6f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.663627 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/312fe1d6-7a03-4cb5-8675-3863ce774c6f-config-data\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.667148 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qblc7\" (UniqueName: \"kubernetes.io/projected/312fe1d6-7a03-4cb5-8675-3863ce774c6f-kube-api-access-qblc7\") pod \"nova-api-0\" (UID: \"312fe1d6-7a03-4cb5-8675-3863ce774c6f\") " pod="openstack/nova-api-0" Mar 08 00:48:09 crc kubenswrapper[4762]: I0308 00:48:09.787805 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 08 00:48:10 crc kubenswrapper[4762]: I0308 00:48:10.012089 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f3231a23-d920-4cf2-b78f-65ecf0d67c77","Type":"ContainerStarted","Data":"5c2de8b6bd064510e28ae81c133937a8e0ba2147f5d8c6aa7a50ef088ae7e5cb"} Mar 08 00:48:10 crc kubenswrapper[4762]: I0308 00:48:10.012446 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f3231a23-d920-4cf2-b78f-65ecf0d67c77","Type":"ContainerStarted","Data":"d01cf2f577691b767002102e0a03418d874a1394b819a436f9df4e5f6320467a"} Mar 08 00:48:10 crc kubenswrapper[4762]: I0308 00:48:10.012480 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f3231a23-d920-4cf2-b78f-65ecf0d67c77","Type":"ContainerStarted","Data":"b90bbd8d37e8326fdef8eec3771693839f7230a82f087807bb68daed209952e4"} Mar 08 00:48:10 crc kubenswrapper[4762]: I0308 00:48:10.043529 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.043508579 podStartE2EDuration="2.043508579s" podCreationTimestamp="2026-03-08 00:48:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:48:10.032746341 +0000 UTC m=+1511.506890685" watchObservedRunningTime="2026-03-08 00:48:10.043508579 +0000 UTC m=+1511.517652923" Mar 08 00:48:10 crc kubenswrapper[4762]: I0308 00:48:10.307344 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 08 00:48:10 crc kubenswrapper[4762]: W0308 00:48:10.312539 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod312fe1d6_7a03_4cb5_8675_3863ce774c6f.slice/crio-869a2141a6d4a0313d52b2dc5c5cae1f008da20bd8a1f7dd67abb8e2c9ec4a88 WatchSource:0}: Error 
finding container 869a2141a6d4a0313d52b2dc5c5cae1f008da20bd8a1f7dd67abb8e2c9ec4a88: Status 404 returned error can't find the container with id 869a2141a6d4a0313d52b2dc5c5cae1f008da20bd8a1f7dd67abb8e2c9ec4a88 Mar 08 00:48:11 crc kubenswrapper[4762]: I0308 00:48:11.032066 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"312fe1d6-7a03-4cb5-8675-3863ce774c6f","Type":"ContainerStarted","Data":"3353198452b982711005bdeead8558d13e7ccd8fc5b7a28b38fadfdf48629c28"} Mar 08 00:48:11 crc kubenswrapper[4762]: I0308 00:48:11.032410 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"312fe1d6-7a03-4cb5-8675-3863ce774c6f","Type":"ContainerStarted","Data":"4f899ff78ff7bed1efaacf7fc32f8bdf8633b38f348df409607ab7736ed05c69"} Mar 08 00:48:11 crc kubenswrapper[4762]: I0308 00:48:11.032435 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"312fe1d6-7a03-4cb5-8675-3863ce774c6f","Type":"ContainerStarted","Data":"869a2141a6d4a0313d52b2dc5c5cae1f008da20bd8a1f7dd67abb8e2c9ec4a88"} Mar 08 00:48:11 crc kubenswrapper[4762]: I0308 00:48:11.063625 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.063600629 podStartE2EDuration="2.063600629s" podCreationTimestamp="2026-03-08 00:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:48:11.055658287 +0000 UTC m=+1512.529802641" watchObservedRunningTime="2026-03-08 00:48:11.063600629 +0000 UTC m=+1512.537744983" Mar 08 00:48:11 crc kubenswrapper[4762]: I0308 00:48:11.298593 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11be03d4-ebb5-41c4-be5f-7fab6a7659e3" path="/var/lib/kubelet/pods/11be03d4-ebb5-41c4-be5f-7fab6a7659e3/volumes" Mar 08 00:48:12 crc kubenswrapper[4762]: I0308 00:48:12.383290 4762 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 08 00:48:12 crc kubenswrapper[4762]: I0308 00:48:12.852043 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 00:48:12 crc kubenswrapper[4762]: I0308 00:48:12.852099 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:48:13 crc kubenswrapper[4762]: I0308 00:48:13.724697 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 08 00:48:13 crc kubenswrapper[4762]: I0308 00:48:13.724743 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.075510 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.118898 4762 generic.go:334] "Generic (PLEG): container finished" podID="478f1074-2af5-4d4a-b503-9e4418586e31" containerID="6246521e3de04b0dc1c9c95d41f131d53886bd783dbd52e1acf6e94fe75647d9" exitCode=137 Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.118949 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"478f1074-2af5-4d4a-b503-9e4418586e31","Type":"ContainerDied","Data":"6246521e3de04b0dc1c9c95d41f131d53886bd783dbd52e1acf6e94fe75647d9"} Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.118981 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"478f1074-2af5-4d4a-b503-9e4418586e31","Type":"ContainerDied","Data":"a7d163abbc390aa4e1604f067c484c47789e1d4935e99880e35b0559c241cffa"} Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.119000 4762 scope.go:117] "RemoveContainer" containerID="6246521e3de04b0dc1c9c95d41f131d53886bd783dbd52e1acf6e94fe75647d9" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.119051 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.157427 4762 scope.go:117] "RemoveContainer" containerID="e93ac4285eb08836e80c25e7e36064d3508a5f3434b90e50af31852150158274" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.190160 4762 scope.go:117] "RemoveContainer" containerID="626e0da3ddd506a08ca679a7d0a12b07a98f9b3f605af4def96986bcf242e111" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.216961 4762 scope.go:117] "RemoveContainer" containerID="d9ea6f6ed1de2e41cd66ad1329ed2769034d80b2ae903d6f1396ebac862908b4" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.231711 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-combined-ca-bundle\") pod \"478f1074-2af5-4d4a-b503-9e4418586e31\" (UID: \"478f1074-2af5-4d4a-b503-9e4418586e31\") " Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.231791 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-config-data\") pod \"478f1074-2af5-4d4a-b503-9e4418586e31\" (UID: \"478f1074-2af5-4d4a-b503-9e4418586e31\") " Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.231931 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfd8w\" (UniqueName: \"kubernetes.io/projected/478f1074-2af5-4d4a-b503-9e4418586e31-kube-api-access-jfd8w\") pod \"478f1074-2af5-4d4a-b503-9e4418586e31\" (UID: \"478f1074-2af5-4d4a-b503-9e4418586e31\") " Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.232024 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-scripts\") pod \"478f1074-2af5-4d4a-b503-9e4418586e31\" (UID: \"478f1074-2af5-4d4a-b503-9e4418586e31\") " Mar 
08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.242713 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/478f1074-2af5-4d4a-b503-9e4418586e31-kube-api-access-jfd8w" (OuterVolumeSpecName: "kube-api-access-jfd8w") pod "478f1074-2af5-4d4a-b503-9e4418586e31" (UID: "478f1074-2af5-4d4a-b503-9e4418586e31"). InnerVolumeSpecName "kube-api-access-jfd8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.251195 4762 scope.go:117] "RemoveContainer" containerID="6246521e3de04b0dc1c9c95d41f131d53886bd783dbd52e1acf6e94fe75647d9" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.251430 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-scripts" (OuterVolumeSpecName: "scripts") pod "478f1074-2af5-4d4a-b503-9e4418586e31" (UID: "478f1074-2af5-4d4a-b503-9e4418586e31"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:17 crc kubenswrapper[4762]: E0308 00:48:17.260791 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6246521e3de04b0dc1c9c95d41f131d53886bd783dbd52e1acf6e94fe75647d9\": container with ID starting with 6246521e3de04b0dc1c9c95d41f131d53886bd783dbd52e1acf6e94fe75647d9 not found: ID does not exist" containerID="6246521e3de04b0dc1c9c95d41f131d53886bd783dbd52e1acf6e94fe75647d9" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.260846 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6246521e3de04b0dc1c9c95d41f131d53886bd783dbd52e1acf6e94fe75647d9"} err="failed to get container status \"6246521e3de04b0dc1c9c95d41f131d53886bd783dbd52e1acf6e94fe75647d9\": rpc error: code = NotFound desc = could not find container \"6246521e3de04b0dc1c9c95d41f131d53886bd783dbd52e1acf6e94fe75647d9\": container with 
ID starting with 6246521e3de04b0dc1c9c95d41f131d53886bd783dbd52e1acf6e94fe75647d9 not found: ID does not exist" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.260883 4762 scope.go:117] "RemoveContainer" containerID="e93ac4285eb08836e80c25e7e36064d3508a5f3434b90e50af31852150158274" Mar 08 00:48:17 crc kubenswrapper[4762]: E0308 00:48:17.261311 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e93ac4285eb08836e80c25e7e36064d3508a5f3434b90e50af31852150158274\": container with ID starting with e93ac4285eb08836e80c25e7e36064d3508a5f3434b90e50af31852150158274 not found: ID does not exist" containerID="e93ac4285eb08836e80c25e7e36064d3508a5f3434b90e50af31852150158274" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.261339 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e93ac4285eb08836e80c25e7e36064d3508a5f3434b90e50af31852150158274"} err="failed to get container status \"e93ac4285eb08836e80c25e7e36064d3508a5f3434b90e50af31852150158274\": rpc error: code = NotFound desc = could not find container \"e93ac4285eb08836e80c25e7e36064d3508a5f3434b90e50af31852150158274\": container with ID starting with e93ac4285eb08836e80c25e7e36064d3508a5f3434b90e50af31852150158274 not found: ID does not exist" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.261356 4762 scope.go:117] "RemoveContainer" containerID="626e0da3ddd506a08ca679a7d0a12b07a98f9b3f605af4def96986bcf242e111" Mar 08 00:48:17 crc kubenswrapper[4762]: E0308 00:48:17.261605 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"626e0da3ddd506a08ca679a7d0a12b07a98f9b3f605af4def96986bcf242e111\": container with ID starting with 626e0da3ddd506a08ca679a7d0a12b07a98f9b3f605af4def96986bcf242e111 not found: ID does not exist" containerID="626e0da3ddd506a08ca679a7d0a12b07a98f9b3f605af4def96986bcf242e111" Mar 08 
00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.261632 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"626e0da3ddd506a08ca679a7d0a12b07a98f9b3f605af4def96986bcf242e111"} err="failed to get container status \"626e0da3ddd506a08ca679a7d0a12b07a98f9b3f605af4def96986bcf242e111\": rpc error: code = NotFound desc = could not find container \"626e0da3ddd506a08ca679a7d0a12b07a98f9b3f605af4def96986bcf242e111\": container with ID starting with 626e0da3ddd506a08ca679a7d0a12b07a98f9b3f605af4def96986bcf242e111 not found: ID does not exist" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.261652 4762 scope.go:117] "RemoveContainer" containerID="d9ea6f6ed1de2e41cd66ad1329ed2769034d80b2ae903d6f1396ebac862908b4" Mar 08 00:48:17 crc kubenswrapper[4762]: E0308 00:48:17.263848 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9ea6f6ed1de2e41cd66ad1329ed2769034d80b2ae903d6f1396ebac862908b4\": container with ID starting with d9ea6f6ed1de2e41cd66ad1329ed2769034d80b2ae903d6f1396ebac862908b4 not found: ID does not exist" containerID="d9ea6f6ed1de2e41cd66ad1329ed2769034d80b2ae903d6f1396ebac862908b4" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.263880 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9ea6f6ed1de2e41cd66ad1329ed2769034d80b2ae903d6f1396ebac862908b4"} err="failed to get container status \"d9ea6f6ed1de2e41cd66ad1329ed2769034d80b2ae903d6f1396ebac862908b4\": rpc error: code = NotFound desc = could not find container \"d9ea6f6ed1de2e41cd66ad1329ed2769034d80b2ae903d6f1396ebac862908b4\": container with ID starting with d9ea6f6ed1de2e41cd66ad1329ed2769034d80b2ae903d6f1396ebac862908b4 not found: ID does not exist" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.334615 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfd8w\" (UniqueName: 
\"kubernetes.io/projected/478f1074-2af5-4d4a-b503-9e4418586e31-kube-api-access-jfd8w\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.334682 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.383121 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.384752 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-config-data" (OuterVolumeSpecName: "config-data") pod "478f1074-2af5-4d4a-b503-9e4418586e31" (UID: "478f1074-2af5-4d4a-b503-9e4418586e31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.387692 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "478f1074-2af5-4d4a-b503-9e4418586e31" (UID: "478f1074-2af5-4d4a-b503-9e4418586e31"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.415240 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.436925 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.437001 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/478f1074-2af5-4d4a-b503-9e4418586e31-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.464712 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.474301 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.486438 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 08 00:48:17 crc kubenswrapper[4762]: E0308 00:48:17.487036 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" containerName="aodh-listener" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.487103 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" containerName="aodh-listener" Mar 08 00:48:17 crc kubenswrapper[4762]: E0308 00:48:17.487161 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" containerName="aodh-notifier" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.487209 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" containerName="aodh-notifier" Mar 08 00:48:17 crc 
kubenswrapper[4762]: E0308 00:48:17.487367 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" containerName="aodh-api" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.487418 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" containerName="aodh-api" Mar 08 00:48:17 crc kubenswrapper[4762]: E0308 00:48:17.487477 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" containerName="aodh-evaluator" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.487524 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" containerName="aodh-evaluator" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.487778 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" containerName="aodh-listener" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.487839 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" containerName="aodh-evaluator" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.487903 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" containerName="aodh-notifier" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.488014 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" containerName="aodh-api" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.490812 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.499278 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.499333 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-kqtwz" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.499375 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.499389 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.501116 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.513586 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.655065 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-public-tls-certs\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.655374 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggdqd\" (UniqueName: \"kubernetes.io/projected/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-kube-api-access-ggdqd\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.655505 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-scripts\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.655600 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-internal-tls-certs\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.655781 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-combined-ca-bundle\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.655895 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-config-data\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.697331 4762 scope.go:117] "RemoveContainer" containerID="8d668b72a09e2c58c81f0e42f8e5a2eef83c846a75599e8a85d6116c3d164ebe" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.758714 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-public-tls-certs\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.758990 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggdqd\" (UniqueName: 
\"kubernetes.io/projected/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-kube-api-access-ggdqd\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.759077 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-scripts\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.759235 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-internal-tls-certs\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.759344 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-combined-ca-bundle\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.759638 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-config-data\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.764104 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-internal-tls-certs\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.765650 4762 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-combined-ca-bundle\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.769228 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-public-tls-certs\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.769303 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-scripts\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.774842 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggdqd\" (UniqueName: \"kubernetes.io/projected/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-kube-api-access-ggdqd\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.777293 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-config-data\") pod \"aodh-0\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " pod="openstack/aodh-0" Mar 08 00:48:17 crc kubenswrapper[4762]: I0308 00:48:17.851250 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 08 00:48:18 crc kubenswrapper[4762]: I0308 00:48:18.176121 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 08 00:48:18 crc kubenswrapper[4762]: I0308 00:48:18.444920 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 00:48:18 crc kubenswrapper[4762]: I0308 00:48:18.448246 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 08 00:48:18 crc kubenswrapper[4762]: I0308 00:48:18.724752 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 08 00:48:18 crc kubenswrapper[4762]: I0308 00:48:18.725053 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 08 00:48:19 crc kubenswrapper[4762]: I0308 00:48:19.149114 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f69ade-3262-4bc2-9c8d-f17ebeabfce2","Type":"ContainerStarted","Data":"64199a12e6195bd56edf4c910e497f0eb8ac34066ebf37ff2d114f36827dca07"} Mar 08 00:48:19 crc kubenswrapper[4762]: I0308 00:48:19.280193 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="478f1074-2af5-4d4a-b503-9e4418586e31" path="/var/lib/kubelet/pods/478f1074-2af5-4d4a-b503-9e4418586e31/volumes" Mar 08 00:48:19 crc kubenswrapper[4762]: I0308 00:48:19.739887 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f3231a23-d920-4cf2-b78f-65ecf0d67c77" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 00:48:19 crc kubenswrapper[4762]: I0308 00:48:19.740158 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="f3231a23-d920-4cf2-b78f-65ecf0d67c77" 
containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.254:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 00:48:19 crc kubenswrapper[4762]: I0308 00:48:19.788453 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 08 00:48:19 crc kubenswrapper[4762]: I0308 00:48:19.791784 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 08 00:48:20 crc kubenswrapper[4762]: I0308 00:48:20.170424 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f69ade-3262-4bc2-9c8d-f17ebeabfce2","Type":"ContainerStarted","Data":"fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593"} Mar 08 00:48:20 crc kubenswrapper[4762]: I0308 00:48:20.170882 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f69ade-3262-4bc2-9c8d-f17ebeabfce2","Type":"ContainerStarted","Data":"209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c"} Mar 08 00:48:20 crc kubenswrapper[4762]: I0308 00:48:20.798983 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="312fe1d6-7a03-4cb5-8675-3863ce774c6f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.255:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 00:48:20 crc kubenswrapper[4762]: I0308 00:48:20.798998 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="312fe1d6-7a03-4cb5-8675-3863ce774c6f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.255:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 00:48:21 crc kubenswrapper[4762]: I0308 00:48:21.189610 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"14f69ade-3262-4bc2-9c8d-f17ebeabfce2","Type":"ContainerStarted","Data":"e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a"} Mar 08 00:48:22 crc kubenswrapper[4762]: I0308 00:48:22.204417 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f69ade-3262-4bc2-9c8d-f17ebeabfce2","Type":"ContainerStarted","Data":"183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18"} Mar 08 00:48:22 crc kubenswrapper[4762]: I0308 00:48:22.241892 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.698798886 podStartE2EDuration="5.241870198s" podCreationTimestamp="2026-03-08 00:48:17 +0000 UTC" firstStartedPulling="2026-03-08 00:48:18.44466015 +0000 UTC m=+1519.918804494" lastFinishedPulling="2026-03-08 00:48:20.987731462 +0000 UTC m=+1522.461875806" observedRunningTime="2026-03-08 00:48:22.228879142 +0000 UTC m=+1523.703023486" watchObservedRunningTime="2026-03-08 00:48:22.241870198 +0000 UTC m=+1523.716014552" Mar 08 00:48:23 crc kubenswrapper[4762]: I0308 00:48:23.108234 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 08 00:48:27 crc kubenswrapper[4762]: I0308 00:48:27.535543 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 08 00:48:27 crc kubenswrapper[4762]: I0308 00:48:27.537261 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="24ad8db6-2015-4ebf-847c-64a8c4a548d3" containerName="kube-state-metrics" containerID="cri-o://d3aa62d5919c60e110b67b582ffbc719a3bee2abee9348dfc2f4b323d33bec97" gracePeriod=30 Mar 08 00:48:28 crc kubenswrapper[4762]: I0308 00:48:28.499512 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 08 00:48:28 crc kubenswrapper[4762]: I0308 00:48:28.499907 4762 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/mysqld-exporter-0" podUID="1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc" containerName="mysqld-exporter" containerID="cri-o://7715462ec62888e002bcec09602491f7fc77cae21ff29eef192978b1b3a23d98" gracePeriod=30 Mar 08 00:48:28 crc kubenswrapper[4762]: I0308 00:48:28.739299 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 08 00:48:28 crc kubenswrapper[4762]: I0308 00:48:28.746565 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 08 00:48:28 crc kubenswrapper[4762]: I0308 00:48:28.746675 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.087271 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.210414 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-config-data\") pod \"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc\" (UID: \"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc\") " Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.210631 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-combined-ca-bundle\") pod \"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc\" (UID: \"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc\") " Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.210663 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqskl\" (UniqueName: \"kubernetes.io/projected/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-kube-api-access-kqskl\") pod \"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc\" (UID: \"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc\") " Mar 08 
00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.222565 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-kube-api-access-kqskl" (OuterVolumeSpecName: "kube-api-access-kqskl") pod "1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc" (UID: "1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc"). InnerVolumeSpecName "kube-api-access-kqskl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.255520 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc" (UID: "1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.274388 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-config-data" (OuterVolumeSpecName: "config-data") pod "1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc" (UID: "1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.312689 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqskl\" (UniqueName: \"kubernetes.io/projected/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-kube-api-access-kqskl\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.312748 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.312781 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.334461 4762 generic.go:334] "Generic (PLEG): container finished" podID="24ad8db6-2015-4ebf-847c-64a8c4a548d3" containerID="d3aa62d5919c60e110b67b582ffbc719a3bee2abee9348dfc2f4b323d33bec97" exitCode=2 Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.334561 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"24ad8db6-2015-4ebf-847c-64a8c4a548d3","Type":"ContainerDied","Data":"d3aa62d5919c60e110b67b582ffbc719a3bee2abee9348dfc2f4b323d33bec97"} Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.334616 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"24ad8db6-2015-4ebf-847c-64a8c4a548d3","Type":"ContainerDied","Data":"b3cb9627523d2dfc3b8a04caa195dcca40d2cf5f99b8987c1f41485fc079b6c4"} Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.334631 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3cb9627523d2dfc3b8a04caa195dcca40d2cf5f99b8987c1f41485fc079b6c4" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 
00:48:29.337628 4762 generic.go:334] "Generic (PLEG): container finished" podID="1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc" containerID="7715462ec62888e002bcec09602491f7fc77cae21ff29eef192978b1b3a23d98" exitCode=2 Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.337721 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc","Type":"ContainerDied","Data":"7715462ec62888e002bcec09602491f7fc77cae21ff29eef192978b1b3a23d98"} Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.337773 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc","Type":"ContainerDied","Data":"be06e6457d482e4ce6c1954769aaffaf16f4e498cc8a95ba4acb35c349d97e05"} Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.337776 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.337791 4762 scope.go:117] "RemoveContainer" containerID="7715462ec62888e002bcec09602491f7fc77cae21ff29eef192978b1b3a23d98" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.348151 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.357570 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.370024 4762 scope.go:117] "RemoveContainer" containerID="7715462ec62888e002bcec09602491f7fc77cae21ff29eef192978b1b3a23d98" Mar 08 00:48:29 crc kubenswrapper[4762]: E0308 00:48:29.370449 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7715462ec62888e002bcec09602491f7fc77cae21ff29eef192978b1b3a23d98\": container with ID starting with 7715462ec62888e002bcec09602491f7fc77cae21ff29eef192978b1b3a23d98 not found: ID does not exist" containerID="7715462ec62888e002bcec09602491f7fc77cae21ff29eef192978b1b3a23d98" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.370496 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7715462ec62888e002bcec09602491f7fc77cae21ff29eef192978b1b3a23d98"} err="failed to get container status \"7715462ec62888e002bcec09602491f7fc77cae21ff29eef192978b1b3a23d98\": rpc error: code = NotFound desc = could not find container \"7715462ec62888e002bcec09602491f7fc77cae21ff29eef192978b1b3a23d98\": container with ID starting with 7715462ec62888e002bcec09602491f7fc77cae21ff29eef192978b1b3a23d98 not found: ID does not exist" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.439666 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.455512 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.464351 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Mar 08 00:48:29 crc kubenswrapper[4762]: E0308 00:48:29.464739 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ad8db6-2015-4ebf-847c-64a8c4a548d3" containerName="kube-state-metrics" Mar 08 00:48:29 crc 
kubenswrapper[4762]: I0308 00:48:29.464755 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ad8db6-2015-4ebf-847c-64a8c4a548d3" containerName="kube-state-metrics" Mar 08 00:48:29 crc kubenswrapper[4762]: E0308 00:48:29.464795 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc" containerName="mysqld-exporter" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.464802 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc" containerName="mysqld-exporter" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.464984 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc" containerName="mysqld-exporter" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.465042 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ad8db6-2015-4ebf-847c-64a8c4a548d3" containerName="kube-state-metrics" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.465684 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.472392 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.472400 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.483883 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.522171 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdwmj\" (UniqueName: \"kubernetes.io/projected/24ad8db6-2015-4ebf-847c-64a8c4a548d3-kube-api-access-pdwmj\") pod \"24ad8db6-2015-4ebf-847c-64a8c4a548d3\" (UID: \"24ad8db6-2015-4ebf-847c-64a8c4a548d3\") " Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.527090 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ad8db6-2015-4ebf-847c-64a8c4a548d3-kube-api-access-pdwmj" (OuterVolumeSpecName: "kube-api-access-pdwmj") pod "24ad8db6-2015-4ebf-847c-64a8c4a548d3" (UID: "24ad8db6-2015-4ebf-847c-64a8c4a548d3"). InnerVolumeSpecName "kube-api-access-pdwmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.624478 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231f2489-76c6-4bba-92aa-65f049a666de-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"231f2489-76c6-4bba-92aa-65f049a666de\") " pod="openstack/mysqld-exporter-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.624707 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6vt7\" (UniqueName: \"kubernetes.io/projected/231f2489-76c6-4bba-92aa-65f049a666de-kube-api-access-c6vt7\") pod \"mysqld-exporter-0\" (UID: \"231f2489-76c6-4bba-92aa-65f049a666de\") " pod="openstack/mysqld-exporter-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.625000 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/231f2489-76c6-4bba-92aa-65f049a666de-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"231f2489-76c6-4bba-92aa-65f049a666de\") " pod="openstack/mysqld-exporter-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.625321 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231f2489-76c6-4bba-92aa-65f049a666de-config-data\") pod \"mysqld-exporter-0\" (UID: \"231f2489-76c6-4bba-92aa-65f049a666de\") " pod="openstack/mysqld-exporter-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.626060 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdwmj\" (UniqueName: \"kubernetes.io/projected/24ad8db6-2015-4ebf-847c-64a8c4a548d3-kube-api-access-pdwmj\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.728563 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/231f2489-76c6-4bba-92aa-65f049a666de-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"231f2489-76c6-4bba-92aa-65f049a666de\") " pod="openstack/mysqld-exporter-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.728711 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6vt7\" (UniqueName: \"kubernetes.io/projected/231f2489-76c6-4bba-92aa-65f049a666de-kube-api-access-c6vt7\") pod \"mysqld-exporter-0\" (UID: \"231f2489-76c6-4bba-92aa-65f049a666de\") " pod="openstack/mysqld-exporter-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.728841 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/231f2489-76c6-4bba-92aa-65f049a666de-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"231f2489-76c6-4bba-92aa-65f049a666de\") " pod="openstack/mysqld-exporter-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.729056 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231f2489-76c6-4bba-92aa-65f049a666de-config-data\") pod \"mysqld-exporter-0\" (UID: \"231f2489-76c6-4bba-92aa-65f049a666de\") " pod="openstack/mysqld-exporter-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.732493 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/231f2489-76c6-4bba-92aa-65f049a666de-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"231f2489-76c6-4bba-92aa-65f049a666de\") " pod="openstack/mysqld-exporter-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.733316 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/231f2489-76c6-4bba-92aa-65f049a666de-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"231f2489-76c6-4bba-92aa-65f049a666de\") " pod="openstack/mysqld-exporter-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.733723 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/231f2489-76c6-4bba-92aa-65f049a666de-config-data\") pod \"mysqld-exporter-0\" (UID: \"231f2489-76c6-4bba-92aa-65f049a666de\") " pod="openstack/mysqld-exporter-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.750094 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6vt7\" (UniqueName: \"kubernetes.io/projected/231f2489-76c6-4bba-92aa-65f049a666de-kube-api-access-c6vt7\") pod \"mysqld-exporter-0\" (UID: \"231f2489-76c6-4bba-92aa-65f049a666de\") " pod="openstack/mysqld-exporter-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.788667 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.796340 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.796945 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.803593 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 08 00:48:29 crc kubenswrapper[4762]: I0308 00:48:29.807486 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.277214 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Mar 08 00:48:30 crc kubenswrapper[4762]: W0308 00:48:30.283196 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod231f2489_76c6_4bba_92aa_65f049a666de.slice/crio-52113eae41fba7a04130f8ed98d65e57c15801c730c893b80c8fdfdd9ba147ec WatchSource:0}: Error finding container 52113eae41fba7a04130f8ed98d65e57c15801c730c893b80c8fdfdd9ba147ec: Status 404 returned error can't find the container with id 52113eae41fba7a04130f8ed98d65e57c15801c730c893b80c8fdfdd9ba147ec Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.352442 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"231f2489-76c6-4bba-92aa-65f049a666de","Type":"ContainerStarted","Data":"52113eae41fba7a04130f8ed98d65e57c15801c730c893b80c8fdfdd9ba147ec"} Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.355010 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.355296 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.367723 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.439447 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.461583 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.479853 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.482465 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.488697 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.488739 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.513896 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.547466 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jnrx\" (UniqueName: \"kubernetes.io/projected/b14f9065-ffe7-430a-b9e9-f62ce942558e-kube-api-access-2jnrx\") pod \"kube-state-metrics-0\" (UID: \"b14f9065-ffe7-430a-b9e9-f62ce942558e\") " pod="openstack/kube-state-metrics-0" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.547536 
4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b14f9065-ffe7-430a-b9e9-f62ce942558e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b14f9065-ffe7-430a-b9e9-f62ce942558e\") " pod="openstack/kube-state-metrics-0" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.547625 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b14f9065-ffe7-430a-b9e9-f62ce942558e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b14f9065-ffe7-430a-b9e9-f62ce942558e\") " pod="openstack/kube-state-metrics-0" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.548104 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14f9065-ffe7-430a-b9e9-f62ce942558e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b14f9065-ffe7-430a-b9e9-f62ce942558e\") " pod="openstack/kube-state-metrics-0" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.650939 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jnrx\" (UniqueName: \"kubernetes.io/projected/b14f9065-ffe7-430a-b9e9-f62ce942558e-kube-api-access-2jnrx\") pod \"kube-state-metrics-0\" (UID: \"b14f9065-ffe7-430a-b9e9-f62ce942558e\") " pod="openstack/kube-state-metrics-0" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.651346 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b14f9065-ffe7-430a-b9e9-f62ce942558e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b14f9065-ffe7-430a-b9e9-f62ce942558e\") " pod="openstack/kube-state-metrics-0" Mar 08 00:48:30 crc kubenswrapper[4762]: 
I0308 00:48:30.651494 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b14f9065-ffe7-430a-b9e9-f62ce942558e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b14f9065-ffe7-430a-b9e9-f62ce942558e\") " pod="openstack/kube-state-metrics-0" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.651720 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14f9065-ffe7-430a-b9e9-f62ce942558e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b14f9065-ffe7-430a-b9e9-f62ce942558e\") " pod="openstack/kube-state-metrics-0" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.657088 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/b14f9065-ffe7-430a-b9e9-f62ce942558e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"b14f9065-ffe7-430a-b9e9-f62ce942558e\") " pod="openstack/kube-state-metrics-0" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.657839 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b14f9065-ffe7-430a-b9e9-f62ce942558e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"b14f9065-ffe7-430a-b9e9-f62ce942558e\") " pod="openstack/kube-state-metrics-0" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.663105 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/b14f9065-ffe7-430a-b9e9-f62ce942558e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"b14f9065-ffe7-430a-b9e9-f62ce942558e\") " pod="openstack/kube-state-metrics-0" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.666618 4762 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2jnrx\" (UniqueName: \"kubernetes.io/projected/b14f9065-ffe7-430a-b9e9-f62ce942558e-kube-api-access-2jnrx\") pod \"kube-state-metrics-0\" (UID: \"b14f9065-ffe7-430a-b9e9-f62ce942558e\") " pod="openstack/kube-state-metrics-0" Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.717378 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.717707 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerName="proxy-httpd" containerID="cri-o://b2aa2c74672a1d414d46109bbf9572b51e447c2f72e098ebead6253550d7ad9e" gracePeriod=30 Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.717780 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerName="ceilometer-notification-agent" containerID="cri-o://c8b5f66562cc6f5a66a4206297258c255ec5023f21a9bca16e0d57b87828ec7d" gracePeriod=30 Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.717916 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerName="sg-core" containerID="cri-o://badb6d4606c8be5d04a5a325cf9f150c36caa6172f8d275533408ed8495ff03a" gracePeriod=30 Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.718034 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerName="ceilometer-central-agent" containerID="cri-o://19646fe92e6c1f21a498316c647982ab7041dff36caefbe4cf7dfff034089a65" gracePeriod=30 Mar 08 00:48:30 crc kubenswrapper[4762]: I0308 00:48:30.814890 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 08 00:48:31 crc kubenswrapper[4762]: I0308 00:48:31.312114 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc" path="/var/lib/kubelet/pods/1fde1aa0-5d54-46e8-b45e-bd9f3bfa1bfc/volumes" Mar 08 00:48:31 crc kubenswrapper[4762]: I0308 00:48:31.313880 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ad8db6-2015-4ebf-847c-64a8c4a548d3" path="/var/lib/kubelet/pods/24ad8db6-2015-4ebf-847c-64a8c4a548d3/volumes" Mar 08 00:48:31 crc kubenswrapper[4762]: W0308 00:48:31.316629 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb14f9065_ffe7_430a_b9e9_f62ce942558e.slice/crio-b3321b65053809097f9357b88cab61e6c611e542e60ff6766071abea8dd8bd03 WatchSource:0}: Error finding container b3321b65053809097f9357b88cab61e6c611e542e60ff6766071abea8dd8bd03: Status 404 returned error can't find the container with id b3321b65053809097f9357b88cab61e6c611e542e60ff6766071abea8dd8bd03 Mar 08 00:48:31 crc kubenswrapper[4762]: I0308 00:48:31.321381 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 08 00:48:31 crc kubenswrapper[4762]: I0308 00:48:31.363624 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b14f9065-ffe7-430a-b9e9-f62ce942558e","Type":"ContainerStarted","Data":"b3321b65053809097f9357b88cab61e6c611e542e60ff6766071abea8dd8bd03"} Mar 08 00:48:31 crc kubenswrapper[4762]: I0308 00:48:31.368867 4762 generic.go:334] "Generic (PLEG): container finished" podID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerID="b2aa2c74672a1d414d46109bbf9572b51e447c2f72e098ebead6253550d7ad9e" exitCode=0 Mar 08 00:48:31 crc kubenswrapper[4762]: I0308 00:48:31.368896 4762 generic.go:334] "Generic (PLEG): container finished" podID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" 
containerID="badb6d4606c8be5d04a5a325cf9f150c36caa6172f8d275533408ed8495ff03a" exitCode=2 Mar 08 00:48:31 crc kubenswrapper[4762]: I0308 00:48:31.368905 4762 generic.go:334] "Generic (PLEG): container finished" podID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerID="19646fe92e6c1f21a498316c647982ab7041dff36caefbe4cf7dfff034089a65" exitCode=0 Mar 08 00:48:31 crc kubenswrapper[4762]: I0308 00:48:31.368986 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac8ddec1-5858-4067-a3f2-56162c0e09f1","Type":"ContainerDied","Data":"b2aa2c74672a1d414d46109bbf9572b51e447c2f72e098ebead6253550d7ad9e"} Mar 08 00:48:31 crc kubenswrapper[4762]: I0308 00:48:31.369011 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac8ddec1-5858-4067-a3f2-56162c0e09f1","Type":"ContainerDied","Data":"badb6d4606c8be5d04a5a325cf9f150c36caa6172f8d275533408ed8495ff03a"} Mar 08 00:48:31 crc kubenswrapper[4762]: I0308 00:48:31.369021 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac8ddec1-5858-4067-a3f2-56162c0e09f1","Type":"ContainerDied","Data":"19646fe92e6c1f21a498316c647982ab7041dff36caefbe4cf7dfff034089a65"} Mar 08 00:48:31 crc kubenswrapper[4762]: I0308 00:48:31.371569 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"231f2489-76c6-4bba-92aa-65f049a666de","Type":"ContainerStarted","Data":"5146d4ac525498866ea78d0ece749a64c7f0a178b9074249392b453fe7e53320"} Mar 08 00:48:31 crc kubenswrapper[4762]: I0308 00:48:31.416044 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=1.626824124 podStartE2EDuration="2.416023875s" podCreationTimestamp="2026-03-08 00:48:29 +0000 UTC" firstStartedPulling="2026-03-08 00:48:30.28785395 +0000 UTC m=+1531.761998294" lastFinishedPulling="2026-03-08 00:48:31.077053701 +0000 UTC m=+1532.551198045" 
observedRunningTime="2026-03-08 00:48:31.387515306 +0000 UTC m=+1532.861659660" watchObservedRunningTime="2026-03-08 00:48:31.416023875 +0000 UTC m=+1532.890168219" Mar 08 00:48:32 crc kubenswrapper[4762]: I0308 00:48:32.382597 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b14f9065-ffe7-430a-b9e9-f62ce942558e","Type":"ContainerStarted","Data":"8f39958cd08959f9d964ed31a6d3fab2c70746797b9bfe664f9985cdcbd819fe"} Mar 08 00:48:32 crc kubenswrapper[4762]: I0308 00:48:32.407871 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.8661759999999998 podStartE2EDuration="2.407847963s" podCreationTimestamp="2026-03-08 00:48:30 +0000 UTC" firstStartedPulling="2026-03-08 00:48:31.319573235 +0000 UTC m=+1532.793717579" lastFinishedPulling="2026-03-08 00:48:31.861245158 +0000 UTC m=+1533.335389542" observedRunningTime="2026-03-08 00:48:32.400713216 +0000 UTC m=+1533.874857570" watchObservedRunningTime="2026-03-08 00:48:32.407847963 +0000 UTC m=+1533.881992317" Mar 08 00:48:33 crc kubenswrapper[4762]: I0308 00:48:33.392171 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.052824 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.255140 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-config-data\") pod \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.255580 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-scripts\") pod \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.255706 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-combined-ca-bundle\") pod \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.255746 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnznh\" (UniqueName: \"kubernetes.io/projected/ac8ddec1-5858-4067-a3f2-56162c0e09f1-kube-api-access-jnznh\") pod \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.255784 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac8ddec1-5858-4067-a3f2-56162c0e09f1-log-httpd\") pod \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.255806 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-sg-core-conf-yaml\") pod \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.255930 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac8ddec1-5858-4067-a3f2-56162c0e09f1-run-httpd\") pod \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\" (UID: \"ac8ddec1-5858-4067-a3f2-56162c0e09f1\") " Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.256428 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac8ddec1-5858-4067-a3f2-56162c0e09f1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ac8ddec1-5858-4067-a3f2-56162c0e09f1" (UID: "ac8ddec1-5858-4067-a3f2-56162c0e09f1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.256647 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac8ddec1-5858-4067-a3f2-56162c0e09f1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ac8ddec1-5858-4067-a3f2-56162c0e09f1" (UID: "ac8ddec1-5858-4067-a3f2-56162c0e09f1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.260129 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-scripts" (OuterVolumeSpecName: "scripts") pod "ac8ddec1-5858-4067-a3f2-56162c0e09f1" (UID: "ac8ddec1-5858-4067-a3f2-56162c0e09f1"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.262042 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac8ddec1-5858-4067-a3f2-56162c0e09f1-kube-api-access-jnznh" (OuterVolumeSpecName: "kube-api-access-jnznh") pod "ac8ddec1-5858-4067-a3f2-56162c0e09f1" (UID: "ac8ddec1-5858-4067-a3f2-56162c0e09f1"). InnerVolumeSpecName "kube-api-access-jnznh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.299997 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ac8ddec1-5858-4067-a3f2-56162c0e09f1" (UID: "ac8ddec1-5858-4067-a3f2-56162c0e09f1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.348836 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac8ddec1-5858-4067-a3f2-56162c0e09f1" (UID: "ac8ddec1-5858-4067-a3f2-56162c0e09f1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.359811 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.360047 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.360140 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnznh\" (UniqueName: \"kubernetes.io/projected/ac8ddec1-5858-4067-a3f2-56162c0e09f1-kube-api-access-jnznh\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.360225 4762 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac8ddec1-5858-4067-a3f2-56162c0e09f1-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.360302 4762 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.360380 4762 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac8ddec1-5858-4067-a3f2-56162c0e09f1-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.432315 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-config-data" (OuterVolumeSpecName: "config-data") pod "ac8ddec1-5858-4067-a3f2-56162c0e09f1" (UID: "ac8ddec1-5858-4067-a3f2-56162c0e09f1"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.461577 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac8ddec1-5858-4067-a3f2-56162c0e09f1-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.466003 4762 generic.go:334] "Generic (PLEG): container finished" podID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerID="c8b5f66562cc6f5a66a4206297258c255ec5023f21a9bca16e0d57b87828ec7d" exitCode=0 Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.466230 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac8ddec1-5858-4067-a3f2-56162c0e09f1","Type":"ContainerDied","Data":"c8b5f66562cc6f5a66a4206297258c255ec5023f21a9bca16e0d57b87828ec7d"} Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.466347 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac8ddec1-5858-4067-a3f2-56162c0e09f1","Type":"ContainerDied","Data":"6e050096e3039d8f5a5c9e6605107970f5ce4541bc5b5f8c602a7e659a3b06dc"} Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.466464 4762 scope.go:117] "RemoveContainer" containerID="b2aa2c74672a1d414d46109bbf9572b51e447c2f72e098ebead6253550d7ad9e" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.466514 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.498024 4762 scope.go:117] "RemoveContainer" containerID="badb6d4606c8be5d04a5a325cf9f150c36caa6172f8d275533408ed8495ff03a" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.507000 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.529306 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.531491 4762 scope.go:117] "RemoveContainer" containerID="c8b5f66562cc6f5a66a4206297258c255ec5023f21a9bca16e0d57b87828ec7d" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.544952 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:48:38 crc kubenswrapper[4762]: E0308 00:48:38.545529 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerName="proxy-httpd" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.545597 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerName="proxy-httpd" Mar 08 00:48:38 crc kubenswrapper[4762]: E0308 00:48:38.545658 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerName="ceilometer-central-agent" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.545708 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerName="ceilometer-central-agent" Mar 08 00:48:38 crc kubenswrapper[4762]: E0308 00:48:38.545828 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerName="ceilometer-notification-agent" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.545919 4762 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerName="ceilometer-notification-agent" Mar 08 00:48:38 crc kubenswrapper[4762]: E0308 00:48:38.546008 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerName="sg-core" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.546082 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerName="sg-core" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.546393 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerName="sg-core" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.546488 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerName="proxy-httpd" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.546577 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerName="ceilometer-central-agent" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.546670 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" containerName="ceilometer-notification-agent" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.549277 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.552343 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.552816 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.553082 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.558350 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.562241 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-scripts\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.562280 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-log-httpd\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.562307 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.562375 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.562436 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87k64\" (UniqueName: \"kubernetes.io/projected/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-kube-api-access-87k64\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.562664 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.562846 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-run-httpd\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.562941 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-config-data\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.607031 4762 scope.go:117] "RemoveContainer" containerID="19646fe92e6c1f21a498316c647982ab7041dff36caefbe4cf7dfff034089a65" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.625172 4762 scope.go:117] "RemoveContainer" 
containerID="b2aa2c74672a1d414d46109bbf9572b51e447c2f72e098ebead6253550d7ad9e" Mar 08 00:48:38 crc kubenswrapper[4762]: E0308 00:48:38.625556 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2aa2c74672a1d414d46109bbf9572b51e447c2f72e098ebead6253550d7ad9e\": container with ID starting with b2aa2c74672a1d414d46109bbf9572b51e447c2f72e098ebead6253550d7ad9e not found: ID does not exist" containerID="b2aa2c74672a1d414d46109bbf9572b51e447c2f72e098ebead6253550d7ad9e" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.625665 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2aa2c74672a1d414d46109bbf9572b51e447c2f72e098ebead6253550d7ad9e"} err="failed to get container status \"b2aa2c74672a1d414d46109bbf9572b51e447c2f72e098ebead6253550d7ad9e\": rpc error: code = NotFound desc = could not find container \"b2aa2c74672a1d414d46109bbf9572b51e447c2f72e098ebead6253550d7ad9e\": container with ID starting with b2aa2c74672a1d414d46109bbf9572b51e447c2f72e098ebead6253550d7ad9e not found: ID does not exist" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.625783 4762 scope.go:117] "RemoveContainer" containerID="badb6d4606c8be5d04a5a325cf9f150c36caa6172f8d275533408ed8495ff03a" Mar 08 00:48:38 crc kubenswrapper[4762]: E0308 00:48:38.626378 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"badb6d4606c8be5d04a5a325cf9f150c36caa6172f8d275533408ed8495ff03a\": container with ID starting with badb6d4606c8be5d04a5a325cf9f150c36caa6172f8d275533408ed8495ff03a not found: ID does not exist" containerID="badb6d4606c8be5d04a5a325cf9f150c36caa6172f8d275533408ed8495ff03a" Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.626417 4762 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"badb6d4606c8be5d04a5a325cf9f150c36caa6172f8d275533408ed8495ff03a"} err="failed to get container status \"badb6d4606c8be5d04a5a325cf9f150c36caa6172f8d275533408ed8495ff03a\": rpc error: code = NotFound desc = could not find container \"badb6d4606c8be5d04a5a325cf9f150c36caa6172f8d275533408ed8495ff03a\": container with ID starting with badb6d4606c8be5d04a5a325cf9f150c36caa6172f8d275533408ed8495ff03a not found: ID does not exist"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.626450 4762 scope.go:117] "RemoveContainer" containerID="c8b5f66562cc6f5a66a4206297258c255ec5023f21a9bca16e0d57b87828ec7d"
Mar 08 00:48:38 crc kubenswrapper[4762]: E0308 00:48:38.626781 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8b5f66562cc6f5a66a4206297258c255ec5023f21a9bca16e0d57b87828ec7d\": container with ID starting with c8b5f66562cc6f5a66a4206297258c255ec5023f21a9bca16e0d57b87828ec7d not found: ID does not exist" containerID="c8b5f66562cc6f5a66a4206297258c255ec5023f21a9bca16e0d57b87828ec7d"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.626810 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8b5f66562cc6f5a66a4206297258c255ec5023f21a9bca16e0d57b87828ec7d"} err="failed to get container status \"c8b5f66562cc6f5a66a4206297258c255ec5023f21a9bca16e0d57b87828ec7d\": rpc error: code = NotFound desc = could not find container \"c8b5f66562cc6f5a66a4206297258c255ec5023f21a9bca16e0d57b87828ec7d\": container with ID starting with c8b5f66562cc6f5a66a4206297258c255ec5023f21a9bca16e0d57b87828ec7d not found: ID does not exist"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.626826 4762 scope.go:117] "RemoveContainer" containerID="19646fe92e6c1f21a498316c647982ab7041dff36caefbe4cf7dfff034089a65"
Mar 08 00:48:38 crc kubenswrapper[4762]: E0308 00:48:38.627193 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19646fe92e6c1f21a498316c647982ab7041dff36caefbe4cf7dfff034089a65\": container with ID starting with 19646fe92e6c1f21a498316c647982ab7041dff36caefbe4cf7dfff034089a65 not found: ID does not exist" containerID="19646fe92e6c1f21a498316c647982ab7041dff36caefbe4cf7dfff034089a65"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.627215 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19646fe92e6c1f21a498316c647982ab7041dff36caefbe4cf7dfff034089a65"} err="failed to get container status \"19646fe92e6c1f21a498316c647982ab7041dff36caefbe4cf7dfff034089a65\": rpc error: code = NotFound desc = could not find container \"19646fe92e6c1f21a498316c647982ab7041dff36caefbe4cf7dfff034089a65\": container with ID starting with 19646fe92e6c1f21a498316c647982ab7041dff36caefbe4cf7dfff034089a65 not found: ID does not exist"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.664456 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.664863 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-run-httpd\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.664905 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-config-data\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.664954 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-scripts\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.664990 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-log-httpd\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.665016 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.665040 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.665078 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87k64\" (UniqueName: \"kubernetes.io/projected/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-kube-api-access-87k64\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.665389 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-run-httpd\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.665431 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-log-httpd\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.670128 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-config-data\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.670903 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.670907 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.686699 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87k64\" (UniqueName: \"kubernetes.io/projected/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-kube-api-access-87k64\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.690356 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-scripts\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.690728 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " pod="openstack/ceilometer-0"
Mar 08 00:48:38 crc kubenswrapper[4762]: I0308 00:48:38.905889 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 08 00:48:39 crc kubenswrapper[4762]: I0308 00:48:39.277635 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac8ddec1-5858-4067-a3f2-56162c0e09f1" path="/var/lib/kubelet/pods/ac8ddec1-5858-4067-a3f2-56162c0e09f1/volumes"
Mar 08 00:48:39 crc kubenswrapper[4762]: I0308 00:48:39.437789 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 08 00:48:39 crc kubenswrapper[4762]: I0308 00:48:39.486594 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d1a02f4-f981-4697-bdc9-d2a3119e46bb","Type":"ContainerStarted","Data":"229f67554a5f6927a78643772ac96525d9feaae884c21c1ca39f030a0cc5b2bb"}
Mar 08 00:48:40 crc kubenswrapper[4762]: I0308 00:48:40.522387 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d1a02f4-f981-4697-bdc9-d2a3119e46bb","Type":"ContainerStarted","Data":"3b31c4e8549e97b665559023fec50ff5e239357d232909b11e852f28b41aff8e"}
Mar 08 00:48:40 crc kubenswrapper[4762]: I0308 00:48:40.828390 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Mar 08 00:48:41 crc kubenswrapper[4762]: I0308 00:48:41.535013 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d1a02f4-f981-4697-bdc9-d2a3119e46bb","Type":"ContainerStarted","Data":"47d80270bf9cb377936c21b36c15bdae005573fe2e91bc9454d084af39f803e6"}
Mar 08 00:48:42 crc kubenswrapper[4762]: I0308 00:48:42.548145 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d1a02f4-f981-4697-bdc9-d2a3119e46bb","Type":"ContainerStarted","Data":"ea885090d468e66ad65b0419df4f851d2e1eb037b3a0ec033847077dedbfafc9"}
Mar 08 00:48:42 crc kubenswrapper[4762]: I0308 00:48:42.854268 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 08 00:48:42 crc kubenswrapper[4762]: I0308 00:48:42.854337 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 08 00:48:44 crc kubenswrapper[4762]: I0308 00:48:44.575261 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d1a02f4-f981-4697-bdc9-d2a3119e46bb","Type":"ContainerStarted","Data":"1b45f34444a30f81a42a8c3c8e9ac6ebc989d958950e98a6b42054b180fd433f"}
Mar 08 00:48:44 crc kubenswrapper[4762]: I0308 00:48:44.576125 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 08 00:48:44 crc kubenswrapper[4762]: I0308 00:48:44.611311 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.734105721 podStartE2EDuration="6.611284607s" podCreationTimestamp="2026-03-08 00:48:38 +0000 UTC" firstStartedPulling="2026-03-08 00:48:39.429827007 +0000 UTC m=+1540.903971361" lastFinishedPulling="2026-03-08 00:48:43.307005903 +0000 UTC m=+1544.781150247" observedRunningTime="2026-03-08 00:48:44.605328726 +0000 UTC m=+1546.079473110" watchObservedRunningTime="2026-03-08 00:48:44.611284607 +0000 UTC m=+1546.085428981"
Mar 08 00:48:56 crc kubenswrapper[4762]: I0308 00:48:56.567959 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4rfhh"]
Mar 08 00:48:56 crc kubenswrapper[4762]: I0308 00:48:56.572555 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4rfhh"
Mar 08 00:48:56 crc kubenswrapper[4762]: I0308 00:48:56.581271 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4rfhh"]
Mar 08 00:48:56 crc kubenswrapper[4762]: I0308 00:48:56.681723 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-utilities\") pod \"community-operators-4rfhh\" (UID: \"4cb2b2f6-4c97-4894-b629-2e77cb1e732e\") " pod="openshift-marketplace/community-operators-4rfhh"
Mar 08 00:48:56 crc kubenswrapper[4762]: I0308 00:48:56.681957 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rk9r\" (UniqueName: \"kubernetes.io/projected/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-kube-api-access-8rk9r\") pod \"community-operators-4rfhh\" (UID: \"4cb2b2f6-4c97-4894-b629-2e77cb1e732e\") " pod="openshift-marketplace/community-operators-4rfhh"
Mar 08 00:48:56 crc kubenswrapper[4762]: I0308 00:48:56.681996 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-catalog-content\") pod \"community-operators-4rfhh\" (UID: \"4cb2b2f6-4c97-4894-b629-2e77cb1e732e\") " pod="openshift-marketplace/community-operators-4rfhh"
Mar 08 00:48:56 crc kubenswrapper[4762]: I0308 00:48:56.784370 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-utilities\") pod \"community-operators-4rfhh\" (UID: \"4cb2b2f6-4c97-4894-b629-2e77cb1e732e\") " pod="openshift-marketplace/community-operators-4rfhh"
Mar 08 00:48:56 crc kubenswrapper[4762]: I0308 00:48:56.784507 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rk9r\" (UniqueName: \"kubernetes.io/projected/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-kube-api-access-8rk9r\") pod \"community-operators-4rfhh\" (UID: \"4cb2b2f6-4c97-4894-b629-2e77cb1e732e\") " pod="openshift-marketplace/community-operators-4rfhh"
Mar 08 00:48:56 crc kubenswrapper[4762]: I0308 00:48:56.784542 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-catalog-content\") pod \"community-operators-4rfhh\" (UID: \"4cb2b2f6-4c97-4894-b629-2e77cb1e732e\") " pod="openshift-marketplace/community-operators-4rfhh"
Mar 08 00:48:56 crc kubenswrapper[4762]: I0308 00:48:56.785182 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-utilities\") pod \"community-operators-4rfhh\" (UID: \"4cb2b2f6-4c97-4894-b629-2e77cb1e732e\") " pod="openshift-marketplace/community-operators-4rfhh"
Mar 08 00:48:56 crc kubenswrapper[4762]: I0308 00:48:56.785207 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-catalog-content\") pod \"community-operators-4rfhh\" (UID: \"4cb2b2f6-4c97-4894-b629-2e77cb1e732e\") " pod="openshift-marketplace/community-operators-4rfhh"
Mar 08 00:48:56 crc kubenswrapper[4762]: I0308 00:48:56.810652 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rk9r\" (UniqueName: \"kubernetes.io/projected/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-kube-api-access-8rk9r\") pod \"community-operators-4rfhh\" (UID: \"4cb2b2f6-4c97-4894-b629-2e77cb1e732e\") " pod="openshift-marketplace/community-operators-4rfhh"
Mar 08 00:48:56 crc kubenswrapper[4762]: I0308 00:48:56.898103 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4rfhh"
Mar 08 00:48:57 crc kubenswrapper[4762]: I0308 00:48:57.450651 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4rfhh"]
Mar 08 00:48:57 crc kubenswrapper[4762]: W0308 00:48:57.455378 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cb2b2f6_4c97_4894_b629_2e77cb1e732e.slice/crio-7d6f3cd8a8effb0f316a8f6270d4eb49814162dcc421da8007eeca51ad8f04da WatchSource:0}: Error finding container 7d6f3cd8a8effb0f316a8f6270d4eb49814162dcc421da8007eeca51ad8f04da: Status 404 returned error can't find the container with id 7d6f3cd8a8effb0f316a8f6270d4eb49814162dcc421da8007eeca51ad8f04da
Mar 08 00:48:57 crc kubenswrapper[4762]: I0308 00:48:57.724933 4762 generic.go:334] "Generic (PLEG): container finished" podID="4cb2b2f6-4c97-4894-b629-2e77cb1e732e" containerID="daea05a7faecbc34b8b9b26fbac167e5b69cc40a9aa0d8ec6e9ebfbcc7b849c0" exitCode=0
Mar 08 00:48:57 crc kubenswrapper[4762]: I0308 00:48:57.724971 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rfhh" event={"ID":"4cb2b2f6-4c97-4894-b629-2e77cb1e732e","Type":"ContainerDied","Data":"daea05a7faecbc34b8b9b26fbac167e5b69cc40a9aa0d8ec6e9ebfbcc7b849c0"}
Mar 08 00:48:57 crc kubenswrapper[4762]: I0308 00:48:57.724994 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rfhh" event={"ID":"4cb2b2f6-4c97-4894-b629-2e77cb1e732e","Type":"ContainerStarted","Data":"7d6f3cd8a8effb0f316a8f6270d4eb49814162dcc421da8007eeca51ad8f04da"}
Mar 08 00:48:58 crc kubenswrapper[4762]: I0308 00:48:58.743435 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rfhh" event={"ID":"4cb2b2f6-4c97-4894-b629-2e77cb1e732e","Type":"ContainerStarted","Data":"673151de4f888984fbee7c76c1ad8f06dce3ad2ed4d72fd8c4b7594c3e80cf5a"}
Mar 08 00:49:00 crc kubenswrapper[4762]: I0308 00:49:00.782163 4762 generic.go:334] "Generic (PLEG): container finished" podID="4cb2b2f6-4c97-4894-b629-2e77cb1e732e" containerID="673151de4f888984fbee7c76c1ad8f06dce3ad2ed4d72fd8c4b7594c3e80cf5a" exitCode=0
Mar 08 00:49:00 crc kubenswrapper[4762]: I0308 00:49:00.782242 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rfhh" event={"ID":"4cb2b2f6-4c97-4894-b629-2e77cb1e732e","Type":"ContainerDied","Data":"673151de4f888984fbee7c76c1ad8f06dce3ad2ed4d72fd8c4b7594c3e80cf5a"}
Mar 08 00:49:01 crc kubenswrapper[4762]: I0308 00:49:01.794564 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rfhh" event={"ID":"4cb2b2f6-4c97-4894-b629-2e77cb1e732e","Type":"ContainerStarted","Data":"8a002be665a885d3aaceb93d314ec5f6303d408fe035cbc37dbdbb66c90286d3"}
Mar 08 00:49:01 crc kubenswrapper[4762]: I0308 00:49:01.821827 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4rfhh" podStartSLOduration=2.205415331 podStartE2EDuration="5.821810036s" podCreationTimestamp="2026-03-08 00:48:56 +0000 UTC" firstStartedPulling="2026-03-08 00:48:57.726134299 +0000 UTC m=+1559.200278633" lastFinishedPulling="2026-03-08 00:49:01.342528984 +0000 UTC m=+1562.816673338" observedRunningTime="2026-03-08 00:49:01.817984549 +0000 UTC m=+1563.292128943" watchObservedRunningTime="2026-03-08 00:49:01.821810036 +0000 UTC m=+1563.295954380"
Mar 08 00:49:04 crc kubenswrapper[4762]: I0308 00:49:04.739170 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckmx"]
Mar 08 00:49:04 crc kubenswrapper[4762]: I0308 00:49:04.741938 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ckmx"
Mar 08 00:49:04 crc kubenswrapper[4762]: I0308 00:49:04.758881 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckmx"]
Mar 08 00:49:04 crc kubenswrapper[4762]: I0308 00:49:04.791431 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krst9\" (UniqueName: \"kubernetes.io/projected/e1e0d5f4-f07f-480f-95ba-65137e937a0d-kube-api-access-krst9\") pod \"redhat-marketplace-2ckmx\" (UID: \"e1e0d5f4-f07f-480f-95ba-65137e937a0d\") " pod="openshift-marketplace/redhat-marketplace-2ckmx"
Mar 08 00:49:04 crc kubenswrapper[4762]: I0308 00:49:04.791739 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e0d5f4-f07f-480f-95ba-65137e937a0d-catalog-content\") pod \"redhat-marketplace-2ckmx\" (UID: \"e1e0d5f4-f07f-480f-95ba-65137e937a0d\") " pod="openshift-marketplace/redhat-marketplace-2ckmx"
Mar 08 00:49:04 crc kubenswrapper[4762]: I0308 00:49:04.791814 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e0d5f4-f07f-480f-95ba-65137e937a0d-utilities\") pod \"redhat-marketplace-2ckmx\" (UID: \"e1e0d5f4-f07f-480f-95ba-65137e937a0d\") " pod="openshift-marketplace/redhat-marketplace-2ckmx"
Mar 08 00:49:04 crc kubenswrapper[4762]: I0308 00:49:04.894751 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e0d5f4-f07f-480f-95ba-65137e937a0d-catalog-content\") pod \"redhat-marketplace-2ckmx\" (UID: \"e1e0d5f4-f07f-480f-95ba-65137e937a0d\") " pod="openshift-marketplace/redhat-marketplace-2ckmx"
Mar 08 00:49:04 crc kubenswrapper[4762]: I0308 00:49:04.894824 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e0d5f4-f07f-480f-95ba-65137e937a0d-utilities\") pod \"redhat-marketplace-2ckmx\" (UID: \"e1e0d5f4-f07f-480f-95ba-65137e937a0d\") " pod="openshift-marketplace/redhat-marketplace-2ckmx"
Mar 08 00:49:04 crc kubenswrapper[4762]: I0308 00:49:04.894873 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krst9\" (UniqueName: \"kubernetes.io/projected/e1e0d5f4-f07f-480f-95ba-65137e937a0d-kube-api-access-krst9\") pod \"redhat-marketplace-2ckmx\" (UID: \"e1e0d5f4-f07f-480f-95ba-65137e937a0d\") " pod="openshift-marketplace/redhat-marketplace-2ckmx"
Mar 08 00:49:04 crc kubenswrapper[4762]: I0308 00:49:04.895661 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e0d5f4-f07f-480f-95ba-65137e937a0d-catalog-content\") pod \"redhat-marketplace-2ckmx\" (UID: \"e1e0d5f4-f07f-480f-95ba-65137e937a0d\") " pod="openshift-marketplace/redhat-marketplace-2ckmx"
Mar 08 00:49:04 crc kubenswrapper[4762]: I0308 00:49:04.895723 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e0d5f4-f07f-480f-95ba-65137e937a0d-utilities\") pod \"redhat-marketplace-2ckmx\" (UID: \"e1e0d5f4-f07f-480f-95ba-65137e937a0d\") " pod="openshift-marketplace/redhat-marketplace-2ckmx"
Mar 08 00:49:04 crc kubenswrapper[4762]: I0308 00:49:04.922510 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krst9\" (UniqueName: \"kubernetes.io/projected/e1e0d5f4-f07f-480f-95ba-65137e937a0d-kube-api-access-krst9\") pod \"redhat-marketplace-2ckmx\" (UID: \"e1e0d5f4-f07f-480f-95ba-65137e937a0d\") " pod="openshift-marketplace/redhat-marketplace-2ckmx"
Mar 08 00:49:05 crc kubenswrapper[4762]: I0308 00:49:05.072980 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ckmx"
Mar 08 00:49:05 crc kubenswrapper[4762]: I0308 00:49:05.549516 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckmx"]
Mar 08 00:49:05 crc kubenswrapper[4762]: W0308 00:49:05.549698 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1e0d5f4_f07f_480f_95ba_65137e937a0d.slice/crio-29c272908d2ec966803ad5c3c0f98864ba90a91012539c977a9784530c78e724 WatchSource:0}: Error finding container 29c272908d2ec966803ad5c3c0f98864ba90a91012539c977a9784530c78e724: Status 404 returned error can't find the container with id 29c272908d2ec966803ad5c3c0f98864ba90a91012539c977a9784530c78e724
Mar 08 00:49:05 crc kubenswrapper[4762]: I0308 00:49:05.854681 4762 generic.go:334] "Generic (PLEG): container finished" podID="e1e0d5f4-f07f-480f-95ba-65137e937a0d" containerID="dbf1c5dd8766619b66b4837f2c32b0e1443509daa3acf911fe9fc33574d1a516" exitCode=0
Mar 08 00:49:05 crc kubenswrapper[4762]: I0308 00:49:05.854761 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckmx" event={"ID":"e1e0d5f4-f07f-480f-95ba-65137e937a0d","Type":"ContainerDied","Data":"dbf1c5dd8766619b66b4837f2c32b0e1443509daa3acf911fe9fc33574d1a516"}
Mar 08 00:49:05 crc kubenswrapper[4762]: I0308 00:49:05.855676 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckmx" event={"ID":"e1e0d5f4-f07f-480f-95ba-65137e937a0d","Type":"ContainerStarted","Data":"29c272908d2ec966803ad5c3c0f98864ba90a91012539c977a9784530c78e724"}
Mar 08 00:49:06 crc kubenswrapper[4762]: I0308 00:49:06.866997 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckmx" event={"ID":"e1e0d5f4-f07f-480f-95ba-65137e937a0d","Type":"ContainerStarted","Data":"aa3d91e0b710898bc5b0bc4a6b924d9964f6731be077569ea83a856493daa850"}
Mar 08 00:49:06 crc kubenswrapper[4762]: I0308 00:49:06.898362 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4rfhh"
Mar 08 00:49:06 crc kubenswrapper[4762]: I0308 00:49:06.898628 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4rfhh"
Mar 08 00:49:07 crc kubenswrapper[4762]: I0308 00:49:07.880302 4762 generic.go:334] "Generic (PLEG): container finished" podID="e1e0d5f4-f07f-480f-95ba-65137e937a0d" containerID="aa3d91e0b710898bc5b0bc4a6b924d9964f6731be077569ea83a856493daa850" exitCode=0
Mar 08 00:49:07 crc kubenswrapper[4762]: I0308 00:49:07.881219 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckmx" event={"ID":"e1e0d5f4-f07f-480f-95ba-65137e937a0d","Type":"ContainerDied","Data":"aa3d91e0b710898bc5b0bc4a6b924d9964f6731be077569ea83a856493daa850"}
Mar 08 00:49:07 crc kubenswrapper[4762]: I0308 00:49:07.948487 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4rfhh" podUID="4cb2b2f6-4c97-4894-b629-2e77cb1e732e" containerName="registry-server" probeResult="failure" output=<
Mar 08 00:49:07 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s
Mar 08 00:49:07 crc kubenswrapper[4762]: >
Mar 08 00:49:08 crc kubenswrapper[4762]: I0308 00:49:08.892392 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckmx" event={"ID":"e1e0d5f4-f07f-480f-95ba-65137e937a0d","Type":"ContainerStarted","Data":"9779cb1cd9e5d66c5f76bef120132555182da13c6783c78f746862d67b543f2a"}
Mar 08 00:49:08 crc kubenswrapper[4762]: I0308 00:49:08.911846 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2ckmx" podStartSLOduration=2.476608368 podStartE2EDuration="4.911825093s" podCreationTimestamp="2026-03-08 00:49:04 +0000 UTC" firstStartedPulling="2026-03-08 00:49:05.859151404 +0000 UTC m=+1567.333295758" lastFinishedPulling="2026-03-08 00:49:08.294368099 +0000 UTC m=+1569.768512483" observedRunningTime="2026-03-08 00:49:08.911179973 +0000 UTC m=+1570.385324317" watchObservedRunningTime="2026-03-08 00:49:08.911825093 +0000 UTC m=+1570.385969437"
Mar 08 00:49:08 crc kubenswrapper[4762]: I0308 00:49:08.921567 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Mar 08 00:49:12 crc kubenswrapper[4762]: I0308 00:49:12.852234 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 08 00:49:12 crc kubenswrapper[4762]: I0308 00:49:12.852830 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 08 00:49:12 crc kubenswrapper[4762]: I0308 00:49:12.852876 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4"
Mar 08 00:49:12 crc kubenswrapper[4762]: I0308 00:49:12.854305 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"34431af11881eaae8980f4fa624e154f145ace7580df4ac523d50069777cde15"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 08 00:49:12 crc kubenswrapper[4762]: I0308 00:49:12.854366 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://34431af11881eaae8980f4fa624e154f145ace7580df4ac523d50069777cde15" gracePeriod=600
Mar 08 00:49:13 crc kubenswrapper[4762]: I0308 00:49:13.961521 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="34431af11881eaae8980f4fa624e154f145ace7580df4ac523d50069777cde15" exitCode=0
Mar 08 00:49:13 crc kubenswrapper[4762]: I0308 00:49:13.961584 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"34431af11881eaae8980f4fa624e154f145ace7580df4ac523d50069777cde15"}
Mar 08 00:49:13 crc kubenswrapper[4762]: I0308 00:49:13.961841 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac"}
Mar 08 00:49:13 crc kubenswrapper[4762]: I0308 00:49:13.961861 4762 scope.go:117] "RemoveContainer" containerID="94fdabdefc94b9566cc477b9dd53129703a711b224f05ccb5a6e2de2ee8a0c6d"
Mar 08 00:49:14 crc kubenswrapper[4762]: I0308 00:49:14.623065 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j9r4z"]
Mar 08 00:49:14 crc kubenswrapper[4762]: I0308 00:49:14.625527 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j9r4z"
Mar 08 00:49:14 crc kubenswrapper[4762]: I0308 00:49:14.644238 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j9r4z"]
Mar 08 00:49:14 crc kubenswrapper[4762]: I0308 00:49:14.796101 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/469bdd06-dc45-43a3-9537-51dca073b9b0-utilities\") pod \"certified-operators-j9r4z\" (UID: \"469bdd06-dc45-43a3-9537-51dca073b9b0\") " pod="openshift-marketplace/certified-operators-j9r4z"
Mar 08 00:49:14 crc kubenswrapper[4762]: I0308 00:49:14.796198 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/469bdd06-dc45-43a3-9537-51dca073b9b0-catalog-content\") pod \"certified-operators-j9r4z\" (UID: \"469bdd06-dc45-43a3-9537-51dca073b9b0\") " pod="openshift-marketplace/certified-operators-j9r4z"
Mar 08 00:49:14 crc kubenswrapper[4762]: I0308 00:49:14.796336 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkqrn\" (UniqueName: \"kubernetes.io/projected/469bdd06-dc45-43a3-9537-51dca073b9b0-kube-api-access-wkqrn\") pod \"certified-operators-j9r4z\" (UID: \"469bdd06-dc45-43a3-9537-51dca073b9b0\") " pod="openshift-marketplace/certified-operators-j9r4z"
Mar 08 00:49:14 crc kubenswrapper[4762]: I0308 00:49:14.899154 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/469bdd06-dc45-43a3-9537-51dca073b9b0-utilities\") pod \"certified-operators-j9r4z\" (UID: \"469bdd06-dc45-43a3-9537-51dca073b9b0\") " pod="openshift-marketplace/certified-operators-j9r4z"
Mar 08 00:49:14 crc kubenswrapper[4762]: I0308 00:49:14.899289 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/469bdd06-dc45-43a3-9537-51dca073b9b0-catalog-content\") pod \"certified-operators-j9r4z\" (UID: \"469bdd06-dc45-43a3-9537-51dca073b9b0\") " pod="openshift-marketplace/certified-operators-j9r4z"
Mar 08 00:49:14 crc kubenswrapper[4762]: I0308 00:49:14.899395 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkqrn\" (UniqueName: \"kubernetes.io/projected/469bdd06-dc45-43a3-9537-51dca073b9b0-kube-api-access-wkqrn\") pod \"certified-operators-j9r4z\" (UID: \"469bdd06-dc45-43a3-9537-51dca073b9b0\") " pod="openshift-marketplace/certified-operators-j9r4z"
Mar 08 00:49:14 crc kubenswrapper[4762]: I0308 00:49:14.900037 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/469bdd06-dc45-43a3-9537-51dca073b9b0-utilities\") pod \"certified-operators-j9r4z\" (UID: \"469bdd06-dc45-43a3-9537-51dca073b9b0\") " pod="openshift-marketplace/certified-operators-j9r4z"
Mar 08 00:49:14 crc kubenswrapper[4762]: I0308 00:49:14.900209 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/469bdd06-dc45-43a3-9537-51dca073b9b0-catalog-content\") pod \"certified-operators-j9r4z\" (UID: \"469bdd06-dc45-43a3-9537-51dca073b9b0\") " pod="openshift-marketplace/certified-operators-j9r4z"
Mar 08 00:49:14 crc kubenswrapper[4762]: I0308 00:49:14.922695 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkqrn\" (UniqueName: \"kubernetes.io/projected/469bdd06-dc45-43a3-9537-51dca073b9b0-kube-api-access-wkqrn\") pod \"certified-operators-j9r4z\" (UID: \"469bdd06-dc45-43a3-9537-51dca073b9b0\") " pod="openshift-marketplace/certified-operators-j9r4z"
Mar 08 00:49:14 crc kubenswrapper[4762]: I0308 00:49:14.950575 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j9r4z"
Mar 08 00:49:15 crc kubenswrapper[4762]: I0308 00:49:15.073339 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2ckmx"
Mar 08 00:49:15 crc kubenswrapper[4762]: I0308 00:49:15.076924 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2ckmx"
Mar 08 00:49:15 crc kubenswrapper[4762]: I0308 00:49:15.189726 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2ckmx"
Mar 08 00:49:15 crc kubenswrapper[4762]: I0308 00:49:15.503000 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j9r4z"]
Mar 08 00:49:15 crc kubenswrapper[4762]: W0308 00:49:15.511419 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod469bdd06_dc45_43a3_9537_51dca073b9b0.slice/crio-6f67734d30a71f19c68e22bb1ef04b12b90afb9be21271fb0615521e7a04dec4 WatchSource:0}: Error finding container 6f67734d30a71f19c68e22bb1ef04b12b90afb9be21271fb0615521e7a04dec4: Status 404 returned error can't find the container with id 6f67734d30a71f19c68e22bb1ef04b12b90afb9be21271fb0615521e7a04dec4
Mar 08 00:49:15 crc kubenswrapper[4762]: I0308 00:49:15.989466 4762 generic.go:334] "Generic (PLEG): container finished" podID="469bdd06-dc45-43a3-9537-51dca073b9b0" containerID="1d15812b57943ec9af2c22c7cf035bfc7f080ab636c2473d6263f1899db3cc25" exitCode=0
Mar 08 00:49:15 crc kubenswrapper[4762]: I0308 00:49:15.989516 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9r4z" event={"ID":"469bdd06-dc45-43a3-9537-51dca073b9b0","Type":"ContainerDied","Data":"1d15812b57943ec9af2c22c7cf035bfc7f080ab636c2473d6263f1899db3cc25"}
Mar 08 00:49:15 crc kubenswrapper[4762]: I0308 00:49:15.989870 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9r4z" event={"ID":"469bdd06-dc45-43a3-9537-51dca073b9b0","Type":"ContainerStarted","Data":"6f67734d30a71f19c68e22bb1ef04b12b90afb9be21271fb0615521e7a04dec4"}
Mar 08 00:49:16 crc kubenswrapper[4762]: I0308 00:49:16.044601 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2ckmx"
Mar 08 00:49:16 crc kubenswrapper[4762]: I0308 00:49:16.949748 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4rfhh"
Mar 08 00:49:17 crc kubenswrapper[4762]: I0308 00:49:17.004884 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9r4z" event={"ID":"469bdd06-dc45-43a3-9537-51dca073b9b0","Type":"ContainerStarted","Data":"3474f38faca4fc31ef89dda2088ef64b678a43328d54d799b1ada1618beca615"}
Mar 08 00:49:17 crc kubenswrapper[4762]: I0308 00:49:17.015748 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4rfhh"
Mar 08 00:49:17 crc kubenswrapper[4762]: I0308 00:49:17.624063 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckmx"]
Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.046392 4762 generic.go:334] "Generic (PLEG): container finished" podID="469bdd06-dc45-43a3-9537-51dca073b9b0" containerID="3474f38faca4fc31ef89dda2088ef64b678a43328d54d799b1ada1618beca615" exitCode=0
Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.046523 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9r4z" event={"ID":"469bdd06-dc45-43a3-9537-51dca073b9b0","Type":"ContainerDied","Data":"3474f38faca4fc31ef89dda2088ef64b678a43328d54d799b1ada1618beca615"}
Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.047128 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2ckmx" podUID="e1e0d5f4-f07f-480f-95ba-65137e937a0d" containerName="registry-server" containerID="cri-o://9779cb1cd9e5d66c5f76bef120132555182da13c6783c78f746862d67b543f2a" gracePeriod=2
Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.638569 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-z4g9j"]
Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.647417 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-z4g9j"]
Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.658459 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ckmx" Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.759299 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-55bpv"] Mar 08 00:49:19 crc kubenswrapper[4762]: E0308 00:49:19.760258 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e0d5f4-f07f-480f-95ba-65137e937a0d" containerName="extract-content" Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.760359 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e0d5f4-f07f-480f-95ba-65137e937a0d" containerName="extract-content" Mar 08 00:49:19 crc kubenswrapper[4762]: E0308 00:49:19.760479 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e0d5f4-f07f-480f-95ba-65137e937a0d" containerName="extract-utilities" Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.760549 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e0d5f4-f07f-480f-95ba-65137e937a0d" containerName="extract-utilities" Mar 08 00:49:19 crc kubenswrapper[4762]: E0308 00:49:19.760626 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e0d5f4-f07f-480f-95ba-65137e937a0d" containerName="registry-server" Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.760676 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e0d5f4-f07f-480f-95ba-65137e937a0d" containerName="registry-server" Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.760936 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e0d5f4-f07f-480f-95ba-65137e937a0d" containerName="registry-server" Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.762126 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-55bpv" Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.773647 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-55bpv"] Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.834677 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krst9\" (UniqueName: \"kubernetes.io/projected/e1e0d5f4-f07f-480f-95ba-65137e937a0d-kube-api-access-krst9\") pod \"e1e0d5f4-f07f-480f-95ba-65137e937a0d\" (UID: \"e1e0d5f4-f07f-480f-95ba-65137e937a0d\") " Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.835025 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e0d5f4-f07f-480f-95ba-65137e937a0d-utilities\") pod \"e1e0d5f4-f07f-480f-95ba-65137e937a0d\" (UID: \"e1e0d5f4-f07f-480f-95ba-65137e937a0d\") " Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.835231 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e0d5f4-f07f-480f-95ba-65137e937a0d-catalog-content\") pod \"e1e0d5f4-f07f-480f-95ba-65137e937a0d\" (UID: \"e1e0d5f4-f07f-480f-95ba-65137e937a0d\") " Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.835977 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1e0d5f4-f07f-480f-95ba-65137e937a0d-utilities" (OuterVolumeSpecName: "utilities") pod "e1e0d5f4-f07f-480f-95ba-65137e937a0d" (UID: "e1e0d5f4-f07f-480f-95ba-65137e937a0d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.840884 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e0d5f4-f07f-480f-95ba-65137e937a0d-kube-api-access-krst9" (OuterVolumeSpecName: "kube-api-access-krst9") pod "e1e0d5f4-f07f-480f-95ba-65137e937a0d" (UID: "e1e0d5f4-f07f-480f-95ba-65137e937a0d"). InnerVolumeSpecName "kube-api-access-krst9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.860340 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1e0d5f4-f07f-480f-95ba-65137e937a0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1e0d5f4-f07f-480f-95ba-65137e937a0d" (UID: "e1e0d5f4-f07f-480f-95ba-65137e937a0d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.937108 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972a5997-389c-467b-ae2f-bc678f076277-combined-ca-bundle\") pod \"heat-db-sync-55bpv\" (UID: \"972a5997-389c-467b-ae2f-bc678f076277\") " pod="openstack/heat-db-sync-55bpv" Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.937171 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86qzc\" (UniqueName: \"kubernetes.io/projected/972a5997-389c-467b-ae2f-bc678f076277-kube-api-access-86qzc\") pod \"heat-db-sync-55bpv\" (UID: \"972a5997-389c-467b-ae2f-bc678f076277\") " pod="openstack/heat-db-sync-55bpv" Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.937463 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/972a5997-389c-467b-ae2f-bc678f076277-config-data\") pod \"heat-db-sync-55bpv\" (UID: \"972a5997-389c-467b-ae2f-bc678f076277\") " pod="openstack/heat-db-sync-55bpv" Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.937951 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krst9\" (UniqueName: \"kubernetes.io/projected/e1e0d5f4-f07f-480f-95ba-65137e937a0d-kube-api-access-krst9\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.937973 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e0d5f4-f07f-480f-95ba-65137e937a0d-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:19 crc kubenswrapper[4762]: I0308 00:49:19.937986 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e0d5f4-f07f-480f-95ba-65137e937a0d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.035926 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4rfhh"] Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.036182 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4rfhh" podUID="4cb2b2f6-4c97-4894-b629-2e77cb1e732e" containerName="registry-server" containerID="cri-o://8a002be665a885d3aaceb93d314ec5f6303d408fe035cbc37dbdbb66c90286d3" gracePeriod=2 Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.040522 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972a5997-389c-467b-ae2f-bc678f076277-config-data\") pod \"heat-db-sync-55bpv\" (UID: \"972a5997-389c-467b-ae2f-bc678f076277\") " pod="openstack/heat-db-sync-55bpv" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.040687 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972a5997-389c-467b-ae2f-bc678f076277-combined-ca-bundle\") pod \"heat-db-sync-55bpv\" (UID: \"972a5997-389c-467b-ae2f-bc678f076277\") " pod="openstack/heat-db-sync-55bpv" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.040728 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86qzc\" (UniqueName: \"kubernetes.io/projected/972a5997-389c-467b-ae2f-bc678f076277-kube-api-access-86qzc\") pod \"heat-db-sync-55bpv\" (UID: \"972a5997-389c-467b-ae2f-bc678f076277\") " pod="openstack/heat-db-sync-55bpv" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.046245 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972a5997-389c-467b-ae2f-bc678f076277-config-data\") pod \"heat-db-sync-55bpv\" (UID: \"972a5997-389c-467b-ae2f-bc678f076277\") " pod="openstack/heat-db-sync-55bpv" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.050184 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972a5997-389c-467b-ae2f-bc678f076277-combined-ca-bundle\") pod \"heat-db-sync-55bpv\" (UID: \"972a5997-389c-467b-ae2f-bc678f076277\") " pod="openstack/heat-db-sync-55bpv" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.060120 4762 generic.go:334] "Generic (PLEG): container finished" podID="e1e0d5f4-f07f-480f-95ba-65137e937a0d" containerID="9779cb1cd9e5d66c5f76bef120132555182da13c6783c78f746862d67b543f2a" exitCode=0 Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.060172 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckmx" event={"ID":"e1e0d5f4-f07f-480f-95ba-65137e937a0d","Type":"ContainerDied","Data":"9779cb1cd9e5d66c5f76bef120132555182da13c6783c78f746862d67b543f2a"} Mar 08 00:49:20 crc kubenswrapper[4762]: 
I0308 00:49:20.060198 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckmx" event={"ID":"e1e0d5f4-f07f-480f-95ba-65137e937a0d","Type":"ContainerDied","Data":"29c272908d2ec966803ad5c3c0f98864ba90a91012539c977a9784530c78e724"} Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.060214 4762 scope.go:117] "RemoveContainer" containerID="9779cb1cd9e5d66c5f76bef120132555182da13c6783c78f746862d67b543f2a" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.060333 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ckmx" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.060960 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86qzc\" (UniqueName: \"kubernetes.io/projected/972a5997-389c-467b-ae2f-bc678f076277-kube-api-access-86qzc\") pod \"heat-db-sync-55bpv\" (UID: \"972a5997-389c-467b-ae2f-bc678f076277\") " pod="openstack/heat-db-sync-55bpv" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.080014 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9r4z" event={"ID":"469bdd06-dc45-43a3-9537-51dca073b9b0","Type":"ContainerStarted","Data":"434d423e6001833c7865a708ebacf64aae915447e89050e250b66d768f4a0ca7"} Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.084413 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-55bpv" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.100815 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j9r4z" podStartSLOduration=2.6317397639999998 podStartE2EDuration="6.100799298s" podCreationTimestamp="2026-03-08 00:49:14 +0000 UTC" firstStartedPulling="2026-03-08 00:49:15.991532646 +0000 UTC m=+1577.465676990" lastFinishedPulling="2026-03-08 00:49:19.46059218 +0000 UTC m=+1580.934736524" observedRunningTime="2026-03-08 00:49:20.100013435 +0000 UTC m=+1581.574157779" watchObservedRunningTime="2026-03-08 00:49:20.100799298 +0000 UTC m=+1581.574943642" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.245690 4762 scope.go:117] "RemoveContainer" containerID="aa3d91e0b710898bc5b0bc4a6b924d9964f6731be077569ea83a856493daa850" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.265884 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckmx"] Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.278174 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckmx"] Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.297009 4762 scope.go:117] "RemoveContainer" containerID="dbf1c5dd8766619b66b4837f2c32b0e1443509daa3acf911fe9fc33574d1a516" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.379594 4762 scope.go:117] "RemoveContainer" containerID="9779cb1cd9e5d66c5f76bef120132555182da13c6783c78f746862d67b543f2a" Mar 08 00:49:20 crc kubenswrapper[4762]: E0308 00:49:20.380830 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9779cb1cd9e5d66c5f76bef120132555182da13c6783c78f746862d67b543f2a\": container with ID starting with 9779cb1cd9e5d66c5f76bef120132555182da13c6783c78f746862d67b543f2a not found: ID does not exist" 
containerID="9779cb1cd9e5d66c5f76bef120132555182da13c6783c78f746862d67b543f2a" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.380859 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9779cb1cd9e5d66c5f76bef120132555182da13c6783c78f746862d67b543f2a"} err="failed to get container status \"9779cb1cd9e5d66c5f76bef120132555182da13c6783c78f746862d67b543f2a\": rpc error: code = NotFound desc = could not find container \"9779cb1cd9e5d66c5f76bef120132555182da13c6783c78f746862d67b543f2a\": container with ID starting with 9779cb1cd9e5d66c5f76bef120132555182da13c6783c78f746862d67b543f2a not found: ID does not exist" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.380878 4762 scope.go:117] "RemoveContainer" containerID="aa3d91e0b710898bc5b0bc4a6b924d9964f6731be077569ea83a856493daa850" Mar 08 00:49:20 crc kubenswrapper[4762]: E0308 00:49:20.382006 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa3d91e0b710898bc5b0bc4a6b924d9964f6731be077569ea83a856493daa850\": container with ID starting with aa3d91e0b710898bc5b0bc4a6b924d9964f6731be077569ea83a856493daa850 not found: ID does not exist" containerID="aa3d91e0b710898bc5b0bc4a6b924d9964f6731be077569ea83a856493daa850" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.382032 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa3d91e0b710898bc5b0bc4a6b924d9964f6731be077569ea83a856493daa850"} err="failed to get container status \"aa3d91e0b710898bc5b0bc4a6b924d9964f6731be077569ea83a856493daa850\": rpc error: code = NotFound desc = could not find container \"aa3d91e0b710898bc5b0bc4a6b924d9964f6731be077569ea83a856493daa850\": container with ID starting with aa3d91e0b710898bc5b0bc4a6b924d9964f6731be077569ea83a856493daa850 not found: ID does not exist" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.382047 4762 scope.go:117] 
"RemoveContainer" containerID="dbf1c5dd8766619b66b4837f2c32b0e1443509daa3acf911fe9fc33574d1a516" Mar 08 00:49:20 crc kubenswrapper[4762]: E0308 00:49:20.382414 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbf1c5dd8766619b66b4837f2c32b0e1443509daa3acf911fe9fc33574d1a516\": container with ID starting with dbf1c5dd8766619b66b4837f2c32b0e1443509daa3acf911fe9fc33574d1a516 not found: ID does not exist" containerID="dbf1c5dd8766619b66b4837f2c32b0e1443509daa3acf911fe9fc33574d1a516" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.382435 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbf1c5dd8766619b66b4837f2c32b0e1443509daa3acf911fe9fc33574d1a516"} err="failed to get container status \"dbf1c5dd8766619b66b4837f2c32b0e1443509daa3acf911fe9fc33574d1a516\": rpc error: code = NotFound desc = could not find container \"dbf1c5dd8766619b66b4837f2c32b0e1443509daa3acf911fe9fc33574d1a516\": container with ID starting with dbf1c5dd8766619b66b4837f2c32b0e1443509daa3acf911fe9fc33574d1a516 not found: ID does not exist" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.620327 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4rfhh" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.688220 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-55bpv"] Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.765295 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-utilities\") pod \"4cb2b2f6-4c97-4894-b629-2e77cb1e732e\" (UID: \"4cb2b2f6-4c97-4894-b629-2e77cb1e732e\") " Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.765457 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rk9r\" (UniqueName: \"kubernetes.io/projected/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-kube-api-access-8rk9r\") pod \"4cb2b2f6-4c97-4894-b629-2e77cb1e732e\" (UID: \"4cb2b2f6-4c97-4894-b629-2e77cb1e732e\") " Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.765569 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-catalog-content\") pod \"4cb2b2f6-4c97-4894-b629-2e77cb1e732e\" (UID: \"4cb2b2f6-4c97-4894-b629-2e77cb1e732e\") " Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.766034 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-utilities" (OuterVolumeSpecName: "utilities") pod "4cb2b2f6-4c97-4894-b629-2e77cb1e732e" (UID: "4cb2b2f6-4c97-4894-b629-2e77cb1e732e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.773062 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-kube-api-access-8rk9r" (OuterVolumeSpecName: "kube-api-access-8rk9r") pod "4cb2b2f6-4c97-4894-b629-2e77cb1e732e" (UID: "4cb2b2f6-4c97-4894-b629-2e77cb1e732e"). InnerVolumeSpecName "kube-api-access-8rk9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.823444 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cb2b2f6-4c97-4894-b629-2e77cb1e732e" (UID: "4cb2b2f6-4c97-4894-b629-2e77cb1e732e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.867668 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.867701 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:20 crc kubenswrapper[4762]: I0308 00:49:20.867712 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rk9r\" (UniqueName: \"kubernetes.io/projected/4cb2b2f6-4c97-4894-b629-2e77cb1e732e-kube-api-access-8rk9r\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.096524 4762 generic.go:334] "Generic (PLEG): container finished" podID="4cb2b2f6-4c97-4894-b629-2e77cb1e732e" 
containerID="8a002be665a885d3aaceb93d314ec5f6303d408fe035cbc37dbdbb66c90286d3" exitCode=0 Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.096636 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rfhh" event={"ID":"4cb2b2f6-4c97-4894-b629-2e77cb1e732e","Type":"ContainerDied","Data":"8a002be665a885d3aaceb93d314ec5f6303d408fe035cbc37dbdbb66c90286d3"} Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.096639 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4rfhh" Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.096671 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rfhh" event={"ID":"4cb2b2f6-4c97-4894-b629-2e77cb1e732e","Type":"ContainerDied","Data":"7d6f3cd8a8effb0f316a8f6270d4eb49814162dcc421da8007eeca51ad8f04da"} Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.096691 4762 scope.go:117] "RemoveContainer" containerID="8a002be665a885d3aaceb93d314ec5f6303d408fe035cbc37dbdbb66c90286d3" Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.101247 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-55bpv" event={"ID":"972a5997-389c-467b-ae2f-bc678f076277","Type":"ContainerStarted","Data":"d468800e17cfdea391d2a772e368a1725266e28000458ef4c23191acd88e8a4c"} Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.134913 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4rfhh"] Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.141778 4762 scope.go:117] "RemoveContainer" containerID="673151de4f888984fbee7c76c1ad8f06dce3ad2ed4d72fd8c4b7594c3e80cf5a" Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.147457 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4rfhh"] Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 
00:49:21.194745 4762 scope.go:117] "RemoveContainer" containerID="daea05a7faecbc34b8b9b26fbac167e5b69cc40a9aa0d8ec6e9ebfbcc7b849c0" Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.254012 4762 scope.go:117] "RemoveContainer" containerID="8a002be665a885d3aaceb93d314ec5f6303d408fe035cbc37dbdbb66c90286d3" Mar 08 00:49:21 crc kubenswrapper[4762]: E0308 00:49:21.256209 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a002be665a885d3aaceb93d314ec5f6303d408fe035cbc37dbdbb66c90286d3\": container with ID starting with 8a002be665a885d3aaceb93d314ec5f6303d408fe035cbc37dbdbb66c90286d3 not found: ID does not exist" containerID="8a002be665a885d3aaceb93d314ec5f6303d408fe035cbc37dbdbb66c90286d3" Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.256262 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a002be665a885d3aaceb93d314ec5f6303d408fe035cbc37dbdbb66c90286d3"} err="failed to get container status \"8a002be665a885d3aaceb93d314ec5f6303d408fe035cbc37dbdbb66c90286d3\": rpc error: code = NotFound desc = could not find container \"8a002be665a885d3aaceb93d314ec5f6303d408fe035cbc37dbdbb66c90286d3\": container with ID starting with 8a002be665a885d3aaceb93d314ec5f6303d408fe035cbc37dbdbb66c90286d3 not found: ID does not exist" Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.256294 4762 scope.go:117] "RemoveContainer" containerID="673151de4f888984fbee7c76c1ad8f06dce3ad2ed4d72fd8c4b7594c3e80cf5a" Mar 08 00:49:21 crc kubenswrapper[4762]: E0308 00:49:21.256717 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"673151de4f888984fbee7c76c1ad8f06dce3ad2ed4d72fd8c4b7594c3e80cf5a\": container with ID starting with 673151de4f888984fbee7c76c1ad8f06dce3ad2ed4d72fd8c4b7594c3e80cf5a not found: ID does not exist" 
containerID="673151de4f888984fbee7c76c1ad8f06dce3ad2ed4d72fd8c4b7594c3e80cf5a" Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.256742 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"673151de4f888984fbee7c76c1ad8f06dce3ad2ed4d72fd8c4b7594c3e80cf5a"} err="failed to get container status \"673151de4f888984fbee7c76c1ad8f06dce3ad2ed4d72fd8c4b7594c3e80cf5a\": rpc error: code = NotFound desc = could not find container \"673151de4f888984fbee7c76c1ad8f06dce3ad2ed4d72fd8c4b7594c3e80cf5a\": container with ID starting with 673151de4f888984fbee7c76c1ad8f06dce3ad2ed4d72fd8c4b7594c3e80cf5a not found: ID does not exist" Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.256780 4762 scope.go:117] "RemoveContainer" containerID="daea05a7faecbc34b8b9b26fbac167e5b69cc40a9aa0d8ec6e9ebfbcc7b849c0" Mar 08 00:49:21 crc kubenswrapper[4762]: E0308 00:49:21.257810 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daea05a7faecbc34b8b9b26fbac167e5b69cc40a9aa0d8ec6e9ebfbcc7b849c0\": container with ID starting with daea05a7faecbc34b8b9b26fbac167e5b69cc40a9aa0d8ec6e9ebfbcc7b849c0 not found: ID does not exist" containerID="daea05a7faecbc34b8b9b26fbac167e5b69cc40a9aa0d8ec6e9ebfbcc7b849c0" Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.257840 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daea05a7faecbc34b8b9b26fbac167e5b69cc40a9aa0d8ec6e9ebfbcc7b849c0"} err="failed to get container status \"daea05a7faecbc34b8b9b26fbac167e5b69cc40a9aa0d8ec6e9ebfbcc7b849c0\": rpc error: code = NotFound desc = could not find container \"daea05a7faecbc34b8b9b26fbac167e5b69cc40a9aa0d8ec6e9ebfbcc7b849c0\": container with ID starting with daea05a7faecbc34b8b9b26fbac167e5b69cc40a9aa0d8ec6e9ebfbcc7b849c0 not found: ID does not exist" Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.306748 4762 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4992e7da-9de7-4354-a35f-a68f8bd0013a" path="/var/lib/kubelet/pods/4992e7da-9de7-4354-a35f-a68f8bd0013a/volumes" Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.307350 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cb2b2f6-4c97-4894-b629-2e77cb1e732e" path="/var/lib/kubelet/pods/4cb2b2f6-4c97-4894-b629-2e77cb1e732e/volumes" Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.308299 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1e0d5f4-f07f-480f-95ba-65137e937a0d" path="/var/lib/kubelet/pods/e1e0d5f4-f07f-480f-95ba-65137e937a0d/volumes" Mar 08 00:49:21 crc kubenswrapper[4762]: I0308 00:49:21.993284 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 08 00:49:22 crc kubenswrapper[4762]: I0308 00:49:22.297007 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:49:22 crc kubenswrapper[4762]: I0308 00:49:22.297287 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerName="ceilometer-central-agent" containerID="cri-o://3b31c4e8549e97b665559023fec50ff5e239357d232909b11e852f28b41aff8e" gracePeriod=30 Mar 08 00:49:22 crc kubenswrapper[4762]: I0308 00:49:22.297416 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerName="proxy-httpd" containerID="cri-o://1b45f34444a30f81a42a8c3c8e9ac6ebc989d958950e98a6b42054b180fd433f" gracePeriod=30 Mar 08 00:49:22 crc kubenswrapper[4762]: I0308 00:49:22.297451 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerName="sg-core" 
containerID="cri-o://ea885090d468e66ad65b0419df4f851d2e1eb037b3a0ec033847077dedbfafc9" gracePeriod=30 Mar 08 00:49:22 crc kubenswrapper[4762]: I0308 00:49:22.297483 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerName="ceilometer-notification-agent" containerID="cri-o://47d80270bf9cb377936c21b36c15bdae005573fe2e91bc9454d084af39f803e6" gracePeriod=30 Mar 08 00:49:22 crc kubenswrapper[4762]: I0308 00:49:22.946462 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.175475 4762 generic.go:334] "Generic (PLEG): container finished" podID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerID="1b45f34444a30f81a42a8c3c8e9ac6ebc989d958950e98a6b42054b180fd433f" exitCode=0 Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.175505 4762 generic.go:334] "Generic (PLEG): container finished" podID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerID="ea885090d468e66ad65b0419df4f851d2e1eb037b3a0ec033847077dedbfafc9" exitCode=2 Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.175529 4762 generic.go:334] "Generic (PLEG): container finished" podID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerID="47d80270bf9cb377936c21b36c15bdae005573fe2e91bc9454d084af39f803e6" exitCode=0 Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.175536 4762 generic.go:334] "Generic (PLEG): container finished" podID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerID="3b31c4e8549e97b665559023fec50ff5e239357d232909b11e852f28b41aff8e" exitCode=0 Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.175555 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d1a02f4-f981-4697-bdc9-d2a3119e46bb","Type":"ContainerDied","Data":"1b45f34444a30f81a42a8c3c8e9ac6ebc989d958950e98a6b42054b180fd433f"} Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 
00:49:23.175577 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d1a02f4-f981-4697-bdc9-d2a3119e46bb","Type":"ContainerDied","Data":"ea885090d468e66ad65b0419df4f851d2e1eb037b3a0ec033847077dedbfafc9"} Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.175586 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d1a02f4-f981-4697-bdc9-d2a3119e46bb","Type":"ContainerDied","Data":"47d80270bf9cb377936c21b36c15bdae005573fe2e91bc9454d084af39f803e6"} Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.175611 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d1a02f4-f981-4697-bdc9-d2a3119e46bb","Type":"ContainerDied","Data":"3b31c4e8549e97b665559023fec50ff5e239357d232909b11e852f28b41aff8e"} Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.672094 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.748143 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-log-httpd\") pod \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.748246 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-combined-ca-bundle\") pod \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.748300 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-scripts\") pod 
\"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.748342 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-run-httpd\") pod \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.748497 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-sg-core-conf-yaml\") pod \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.748522 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-ceilometer-tls-certs\") pod \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.748639 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-config-data\") pod \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.748680 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87k64\" (UniqueName: \"kubernetes.io/projected/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-kube-api-access-87k64\") pod \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\" (UID: \"5d1a02f4-f981-4697-bdc9-d2a3119e46bb\") " Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.750452 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5d1a02f4-f981-4697-bdc9-d2a3119e46bb" (UID: "5d1a02f4-f981-4697-bdc9-d2a3119e46bb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.750536 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5d1a02f4-f981-4697-bdc9-d2a3119e46bb" (UID: "5d1a02f4-f981-4697-bdc9-d2a3119e46bb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.755543 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-scripts" (OuterVolumeSpecName: "scripts") pod "5d1a02f4-f981-4697-bdc9-d2a3119e46bb" (UID: "5d1a02f4-f981-4697-bdc9-d2a3119e46bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.777950 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-kube-api-access-87k64" (OuterVolumeSpecName: "kube-api-access-87k64") pod "5d1a02f4-f981-4697-bdc9-d2a3119e46bb" (UID: "5d1a02f4-f981-4697-bdc9-d2a3119e46bb"). InnerVolumeSpecName "kube-api-access-87k64". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.802281 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5d1a02f4-f981-4697-bdc9-d2a3119e46bb" (UID: "5d1a02f4-f981-4697-bdc9-d2a3119e46bb"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.850658 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87k64\" (UniqueName: \"kubernetes.io/projected/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-kube-api-access-87k64\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.850688 4762 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.850698 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.850707 4762 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.850717 4762 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.857942 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d1a02f4-f981-4697-bdc9-d2a3119e46bb" (UID: "5d1a02f4-f981-4697-bdc9-d2a3119e46bb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.871922 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5d1a02f4-f981-4697-bdc9-d2a3119e46bb" (UID: "5d1a02f4-f981-4697-bdc9-d2a3119e46bb"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.950232 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-config-data" (OuterVolumeSpecName: "config-data") pod "5d1a02f4-f981-4697-bdc9-d2a3119e46bb" (UID: "5d1a02f4-f981-4697-bdc9-d2a3119e46bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.953026 4762 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.953067 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:23 crc kubenswrapper[4762]: I0308 00:49:23.953079 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1a02f4-f981-4697-bdc9-d2a3119e46bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.187673 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"5d1a02f4-f981-4697-bdc9-d2a3119e46bb","Type":"ContainerDied","Data":"229f67554a5f6927a78643772ac96525d9feaae884c21c1ca39f030a0cc5b2bb"} Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.187724 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.187728 4762 scope.go:117] "RemoveContainer" containerID="1b45f34444a30f81a42a8c3c8e9ac6ebc989d958950e98a6b42054b180fd433f" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.221355 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.229945 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.231430 4762 scope.go:117] "RemoveContainer" containerID="ea885090d468e66ad65b0419df4f851d2e1eb037b3a0ec033847077dedbfafc9" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.270856 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:49:24 crc kubenswrapper[4762]: E0308 00:49:24.271259 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerName="proxy-httpd" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.271270 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerName="proxy-httpd" Mar 08 00:49:24 crc kubenswrapper[4762]: E0308 00:49:24.271282 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerName="ceilometer-notification-agent" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.271289 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerName="ceilometer-notification-agent" Mar 08 00:49:24 crc kubenswrapper[4762]: E0308 00:49:24.271302 
4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerName="ceilometer-central-agent" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.271309 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerName="ceilometer-central-agent" Mar 08 00:49:24 crc kubenswrapper[4762]: E0308 00:49:24.271327 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerName="sg-core" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.271333 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerName="sg-core" Mar 08 00:49:24 crc kubenswrapper[4762]: E0308 00:49:24.271342 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb2b2f6-4c97-4894-b629-2e77cb1e732e" containerName="registry-server" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.271348 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cb2b2f6-4c97-4894-b629-2e77cb1e732e" containerName="registry-server" Mar 08 00:49:24 crc kubenswrapper[4762]: E0308 00:49:24.271355 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb2b2f6-4c97-4894-b629-2e77cb1e732e" containerName="extract-utilities" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.271361 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cb2b2f6-4c97-4894-b629-2e77cb1e732e" containerName="extract-utilities" Mar 08 00:49:24 crc kubenswrapper[4762]: E0308 00:49:24.271376 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cb2b2f6-4c97-4894-b629-2e77cb1e732e" containerName="extract-content" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.271382 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cb2b2f6-4c97-4894-b629-2e77cb1e732e" containerName="extract-content" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.271568 4762 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerName="proxy-httpd" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.271585 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerName="ceilometer-notification-agent" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.271597 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerName="ceilometer-central-agent" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.271607 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cb2b2f6-4c97-4894-b629-2e77cb1e732e" containerName="registry-server" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.271621 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" containerName="sg-core" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.283619 4762 scope.go:117] "RemoveContainer" containerID="47d80270bf9cb377936c21b36c15bdae005573fe2e91bc9454d084af39f803e6" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.285520 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.285611 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.288261 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.288865 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.289051 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.328413 4762 scope.go:117] "RemoveContainer" containerID="3b31c4e8549e97b665559023fec50ff5e239357d232909b11e852f28b41aff8e" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.361778 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.361839 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.361867 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghsxm\" (UniqueName: \"kubernetes.io/projected/721da70c-0049-498f-927c-dadcd0867152-kube-api-access-ghsxm\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.362034 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-scripts\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.362168 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/721da70c-0049-498f-927c-dadcd0867152-run-httpd\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.362313 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-config-data\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.362385 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/721da70c-0049-498f-927c-dadcd0867152-log-httpd\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.363975 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.468927 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.469016 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.469047 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.469069 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghsxm\" (UniqueName: \"kubernetes.io/projected/721da70c-0049-498f-927c-dadcd0867152-kube-api-access-ghsxm\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.469128 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-scripts\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.469161 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/721da70c-0049-498f-927c-dadcd0867152-run-httpd\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc 
kubenswrapper[4762]: I0308 00:49:24.469201 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-config-data\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.469229 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/721da70c-0049-498f-927c-dadcd0867152-log-httpd\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.469675 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/721da70c-0049-498f-927c-dadcd0867152-log-httpd\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.470565 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/721da70c-0049-498f-927c-dadcd0867152-run-httpd\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.473997 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.474115 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.474231 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.475617 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-scripts\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.478886 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-config-data\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.489681 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghsxm\" (UniqueName: \"kubernetes.io/projected/721da70c-0049-498f-927c-dadcd0867152-kube-api-access-ghsxm\") pod \"ceilometer-0\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.601247 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.951035 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j9r4z" Mar 08 00:49:24 crc kubenswrapper[4762]: I0308 00:49:24.951410 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j9r4z" Mar 08 00:49:25 crc kubenswrapper[4762]: I0308 00:49:25.050168 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j9r4z" Mar 08 00:49:25 crc kubenswrapper[4762]: I0308 00:49:25.280221 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d1a02f4-f981-4697-bdc9-d2a3119e46bb" path="/var/lib/kubelet/pods/5d1a02f4-f981-4697-bdc9-d2a3119e46bb/volumes" Mar 08 00:49:25 crc kubenswrapper[4762]: I0308 00:49:25.289066 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j9r4z" Mar 08 00:49:25 crc kubenswrapper[4762]: I0308 00:49:25.299744 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 00:49:26 crc kubenswrapper[4762]: I0308 00:49:26.228261 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"721da70c-0049-498f-927c-dadcd0867152","Type":"ContainerStarted","Data":"6603ef2c6933ac34bbb2b2c96575d8b11634ab0d9441dcebe513a55e83c76d07"} Mar 08 00:49:26 crc kubenswrapper[4762]: I0308 00:49:26.408641 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j9r4z"] Mar 08 00:49:26 crc kubenswrapper[4762]: I0308 00:49:26.694020 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="a759d745-52d2-48f8-9848-172ace2b5120" containerName="rabbitmq" 
containerID="cri-o://32123171473b9e7900d27f96249cf6b9cd735efec4b5f24853235b76f21252f0" gracePeriod=604796 Mar 08 00:49:27 crc kubenswrapper[4762]: I0308 00:49:27.236465 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j9r4z" podUID="469bdd06-dc45-43a3-9537-51dca073b9b0" containerName="registry-server" containerID="cri-o://434d423e6001833c7865a708ebacf64aae915447e89050e250b66d768f4a0ca7" gracePeriod=2 Mar 08 00:49:27 crc kubenswrapper[4762]: I0308 00:49:27.349870 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="543cbbde-da2d-43c4-87f9-85f8e4e90101" containerName="rabbitmq" containerID="cri-o://3cad6bed315c16867c9504799cfea0ea54b8be9f89ee1a5dc667c079b4f2b098" gracePeriod=604796 Mar 08 00:49:28 crc kubenswrapper[4762]: I0308 00:49:28.133520 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="543cbbde-da2d-43c4-87f9-85f8e4e90101" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.123:5671: connect: connection refused" Mar 08 00:49:28 crc kubenswrapper[4762]: I0308 00:49:28.250635 4762 generic.go:334] "Generic (PLEG): container finished" podID="469bdd06-dc45-43a3-9537-51dca073b9b0" containerID="434d423e6001833c7865a708ebacf64aae915447e89050e250b66d768f4a0ca7" exitCode=0 Mar 08 00:49:28 crc kubenswrapper[4762]: I0308 00:49:28.250685 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9r4z" event={"ID":"469bdd06-dc45-43a3-9537-51dca073b9b0","Type":"ContainerDied","Data":"434d423e6001833c7865a708ebacf64aae915447e89050e250b66d768f4a0ca7"} Mar 08 00:49:28 crc kubenswrapper[4762]: I0308 00:49:28.430550 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="a759d745-52d2-48f8-9848-172ace2b5120" containerName="rabbitmq" probeResult="failure" output="dial tcp 
10.217.0.124:5671: connect: connection refused" Mar 08 00:49:33 crc kubenswrapper[4762]: I0308 00:49:33.321114 4762 generic.go:334] "Generic (PLEG): container finished" podID="a759d745-52d2-48f8-9848-172ace2b5120" containerID="32123171473b9e7900d27f96249cf6b9cd735efec4b5f24853235b76f21252f0" exitCode=0 Mar 08 00:49:33 crc kubenswrapper[4762]: I0308 00:49:33.321196 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a759d745-52d2-48f8-9848-172ace2b5120","Type":"ContainerDied","Data":"32123171473b9e7900d27f96249cf6b9cd735efec4b5f24853235b76f21252f0"} Mar 08 00:49:34 crc kubenswrapper[4762]: I0308 00:49:34.335685 4762 generic.go:334] "Generic (PLEG): container finished" podID="543cbbde-da2d-43c4-87f9-85f8e4e90101" containerID="3cad6bed315c16867c9504799cfea0ea54b8be9f89ee1a5dc667c079b4f2b098" exitCode=0 Mar 08 00:49:34 crc kubenswrapper[4762]: I0308 00:49:34.335789 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"543cbbde-da2d-43c4-87f9-85f8e4e90101","Type":"ContainerDied","Data":"3cad6bed315c16867c9504799cfea0ea54b8be9f89ee1a5dc667c079b4f2b098"} Mar 08 00:49:34 crc kubenswrapper[4762]: E0308 00:49:34.951629 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 434d423e6001833c7865a708ebacf64aae915447e89050e250b66d768f4a0ca7 is running failed: container process not found" containerID="434d423e6001833c7865a708ebacf64aae915447e89050e250b66d768f4a0ca7" cmd=["grpc_health_probe","-addr=:50051"] Mar 08 00:49:34 crc kubenswrapper[4762]: E0308 00:49:34.952024 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 434d423e6001833c7865a708ebacf64aae915447e89050e250b66d768f4a0ca7 is running failed: container process not found" 
containerID="434d423e6001833c7865a708ebacf64aae915447e89050e250b66d768f4a0ca7" cmd=["grpc_health_probe","-addr=:50051"] Mar 08 00:49:34 crc kubenswrapper[4762]: E0308 00:49:34.952314 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 434d423e6001833c7865a708ebacf64aae915447e89050e250b66d768f4a0ca7 is running failed: container process not found" containerID="434d423e6001833c7865a708ebacf64aae915447e89050e250b66d768f4a0ca7" cmd=["grpc_health_probe","-addr=:50051"] Mar 08 00:49:34 crc kubenswrapper[4762]: E0308 00:49:34.952365 4762 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 434d423e6001833c7865a708ebacf64aae915447e89050e250b66d768f4a0ca7 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-j9r4z" podUID="469bdd06-dc45-43a3-9537-51dca073b9b0" containerName="registry-server" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.687334 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.761093 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-plugins\") pod \"a759d745-52d2-48f8-9848-172ace2b5120\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.761148 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-config-data\") pod \"a759d745-52d2-48f8-9848-172ace2b5120\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.761203 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-confd\") pod \"a759d745-52d2-48f8-9848-172ace2b5120\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.761283 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a759d745-52d2-48f8-9848-172ace2b5120-erlang-cookie-secret\") pod \"a759d745-52d2-48f8-9848-172ace2b5120\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.761374 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfsrh\" (UniqueName: \"kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-kube-api-access-lfsrh\") pod \"a759d745-52d2-48f8-9848-172ace2b5120\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.761415 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a759d745-52d2-48f8-9848-172ace2b5120-pod-info\") pod \"a759d745-52d2-48f8-9848-172ace2b5120\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.761444 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-erlang-cookie\") pod \"a759d745-52d2-48f8-9848-172ace2b5120\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.761519 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-tls\") pod \"a759d745-52d2-48f8-9848-172ace2b5120\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.761543 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"a759d745-52d2-48f8-9848-172ace2b5120\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.761638 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-plugins-conf\") pod \"a759d745-52d2-48f8-9848-172ace2b5120\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.761723 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-server-conf\") pod \"a759d745-52d2-48f8-9848-172ace2b5120\" (UID: \"a759d745-52d2-48f8-9848-172ace2b5120\") " Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 
00:49:35.776067 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a759d745-52d2-48f8-9848-172ace2b5120" (UID: "a759d745-52d2-48f8-9848-172ace2b5120"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.777327 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a759d745-52d2-48f8-9848-172ace2b5120" (UID: "a759d745-52d2-48f8-9848-172ace2b5120"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.780000 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a759d745-52d2-48f8-9848-172ace2b5120" (UID: "a759d745-52d2-48f8-9848-172ace2b5120"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.795873 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a759d745-52d2-48f8-9848-172ace2b5120-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a759d745-52d2-48f8-9848-172ace2b5120" (UID: "a759d745-52d2-48f8-9848-172ace2b5120"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.811727 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-kube-api-access-lfsrh" (OuterVolumeSpecName: "kube-api-access-lfsrh") pod "a759d745-52d2-48f8-9848-172ace2b5120" (UID: "a759d745-52d2-48f8-9848-172ace2b5120"). InnerVolumeSpecName "kube-api-access-lfsrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.812921 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a759d745-52d2-48f8-9848-172ace2b5120" (UID: "a759d745-52d2-48f8-9848-172ace2b5120"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.814950 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "a759d745-52d2-48f8-9848-172ace2b5120" (UID: "a759d745-52d2-48f8-9848-172ace2b5120"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.841153 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a759d745-52d2-48f8-9848-172ace2b5120-pod-info" (OuterVolumeSpecName: "pod-info") pod "a759d745-52d2-48f8-9848-172ace2b5120" (UID: "a759d745-52d2-48f8-9848-172ace2b5120"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.869214 4762 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.869248 4762 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a759d745-52d2-48f8-9848-172ace2b5120-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.869258 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfsrh\" (UniqueName: \"kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-kube-api-access-lfsrh\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.869267 4762 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a759d745-52d2-48f8-9848-172ace2b5120-pod-info\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.869279 4762 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.869286 4762 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.869309 4762 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 
00:49:35.869318 4762 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:35 crc kubenswrapper[4762]: I0308 00:49:35.869519 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-config-data" (OuterVolumeSpecName: "config-data") pod "a759d745-52d2-48f8-9848-172ace2b5120" (UID: "a759d745-52d2-48f8-9848-172ace2b5120"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.087114 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.087839 4762 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.090580 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-server-conf" (OuterVolumeSpecName: "server-conf") pod "a759d745-52d2-48f8-9848-172ace2b5120" (UID: "a759d745-52d2-48f8-9848-172ace2b5120"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.138323 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a759d745-52d2-48f8-9848-172ace2b5120" (UID: "a759d745-52d2-48f8-9848-172ace2b5120"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.193631 4762 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a759d745-52d2-48f8-9848-172ace2b5120-server-conf\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.193670 4762 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a759d745-52d2-48f8-9848-172ace2b5120-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.193682 4762 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.361109 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a759d745-52d2-48f8-9848-172ace2b5120","Type":"ContainerDied","Data":"e6c11f244b0f35c7f53574825260a487dc5bcdfea5a34a26b892024e79f85d90"} Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.361187 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.361203 4762 scope.go:117] "RemoveContainer" containerID="32123171473b9e7900d27f96249cf6b9cd735efec4b5f24853235b76f21252f0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.409711 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.430864 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.453815 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 08 00:49:36 crc kubenswrapper[4762]: E0308 00:49:36.454347 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a759d745-52d2-48f8-9848-172ace2b5120" containerName="setup-container" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.454368 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="a759d745-52d2-48f8-9848-172ace2b5120" containerName="setup-container" Mar 08 00:49:36 crc kubenswrapper[4762]: E0308 00:49:36.454390 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a759d745-52d2-48f8-9848-172ace2b5120" containerName="rabbitmq" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.454399 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="a759d745-52d2-48f8-9848-172ace2b5120" containerName="rabbitmq" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.454689 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="a759d745-52d2-48f8-9848-172ace2b5120" containerName="rabbitmq" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.456164 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.461337 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.461565 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.461808 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.462078 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.462161 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-qpbz2" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.463026 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.463186 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.468408 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.500849 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83567ea1-f607-4be2-b0af-6d09bcf74e06-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.500923 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/83567ea1-f607-4be2-b0af-6d09bcf74e06-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.500953 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83567ea1-f607-4be2-b0af-6d09bcf74e06-pod-info\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.501005 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83567ea1-f607-4be2-b0af-6d09bcf74e06-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.501030 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83567ea1-f607-4be2-b0af-6d09bcf74e06-server-conf\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.501277 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-767wd\" (UniqueName: \"kubernetes.io/projected/83567ea1-f607-4be2-b0af-6d09bcf74e06-kube-api-access-767wd\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.501335 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/83567ea1-f607-4be2-b0af-6d09bcf74e06-config-data\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.501364 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.501394 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83567ea1-f607-4be2-b0af-6d09bcf74e06-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.501455 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83567ea1-f607-4be2-b0af-6d09bcf74e06-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.501482 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83567ea1-f607-4be2-b0af-6d09bcf74e06-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.603531 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/83567ea1-f607-4be2-b0af-6d09bcf74e06-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.603587 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83567ea1-f607-4be2-b0af-6d09bcf74e06-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.603623 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83567ea1-f607-4be2-b0af-6d09bcf74e06-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.603684 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83567ea1-f607-4be2-b0af-6d09bcf74e06-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.603710 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83567ea1-f607-4be2-b0af-6d09bcf74e06-pod-info\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.603793 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83567ea1-f607-4be2-b0af-6d09bcf74e06-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " 
pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.603825 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83567ea1-f607-4be2-b0af-6d09bcf74e06-server-conf\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.603853 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-767wd\" (UniqueName: \"kubernetes.io/projected/83567ea1-f607-4be2-b0af-6d09bcf74e06-kube-api-access-767wd\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.603912 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83567ea1-f607-4be2-b0af-6d09bcf74e06-config-data\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.603948 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.603973 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83567ea1-f607-4be2-b0af-6d09bcf74e06-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.605593 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.606213 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/83567ea1-f607-4be2-b0af-6d09bcf74e06-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.606331 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/83567ea1-f607-4be2-b0af-6d09bcf74e06-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.606335 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/83567ea1-f607-4be2-b0af-6d09bcf74e06-config-data\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.609304 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/83567ea1-f607-4be2-b0af-6d09bcf74e06-server-conf\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.611171 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/83567ea1-f607-4be2-b0af-6d09bcf74e06-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.617364 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/83567ea1-f607-4be2-b0af-6d09bcf74e06-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.621243 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/83567ea1-f607-4be2-b0af-6d09bcf74e06-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.622058 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/83567ea1-f607-4be2-b0af-6d09bcf74e06-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.622959 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/83567ea1-f607-4be2-b0af-6d09bcf74e06-pod-info\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.632680 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-767wd\" (UniqueName: \"kubernetes.io/projected/83567ea1-f607-4be2-b0af-6d09bcf74e06-kube-api-access-767wd\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.656686 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"83567ea1-f607-4be2-b0af-6d09bcf74e06\") " pod="openstack/rabbitmq-server-0" Mar 08 00:49:36 crc kubenswrapper[4762]: I0308 00:49:36.789322 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 08 00:49:37 crc kubenswrapper[4762]: I0308 00:49:37.274436 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a759d745-52d2-48f8-9848-172ace2b5120" path="/var/lib/kubelet/pods/a759d745-52d2-48f8-9848-172ace2b5120/volumes" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.134309 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="543cbbde-da2d-43c4-87f9-85f8e4e90101" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.123:5671: connect: connection refused" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.569235 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-8wmgv"] Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.570900 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.575570 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.586128 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-8wmgv"] Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.650614 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl7cj\" (UniqueName: \"kubernetes.io/projected/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-kube-api-access-tl7cj\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.650698 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.650755 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.650973 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: 
\"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.651038 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.651068 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.651109 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-config\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.752573 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.752627 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-config\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " 
pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.752692 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl7cj\" (UniqueName: \"kubernetes.io/projected/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-kube-api-access-tl7cj\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.752708 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.752738 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.752873 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.752908 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " 
pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.753892 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.753949 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.753949 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.755381 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.756526 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: 
I0308 00:49:38.756641 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-config\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.787458 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl7cj\" (UniqueName: \"kubernetes.io/projected/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-kube-api-access-tl7cj\") pod \"dnsmasq-dns-5b75489c6f-8wmgv\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:38 crc kubenswrapper[4762]: I0308 00:49:38.913309 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:40 crc kubenswrapper[4762]: I0308 00:49:40.694009 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j9r4z" Mar 08 00:49:40 crc kubenswrapper[4762]: I0308 00:49:40.792308 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/469bdd06-dc45-43a3-9537-51dca073b9b0-catalog-content\") pod \"469bdd06-dc45-43a3-9537-51dca073b9b0\" (UID: \"469bdd06-dc45-43a3-9537-51dca073b9b0\") " Mar 08 00:49:40 crc kubenswrapper[4762]: I0308 00:49:40.792597 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/469bdd06-dc45-43a3-9537-51dca073b9b0-utilities\") pod \"469bdd06-dc45-43a3-9537-51dca073b9b0\" (UID: \"469bdd06-dc45-43a3-9537-51dca073b9b0\") " Mar 08 00:49:40 crc kubenswrapper[4762]: I0308 00:49:40.792715 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkqrn\" (UniqueName: \"kubernetes.io/projected/469bdd06-dc45-43a3-9537-51dca073b9b0-kube-api-access-wkqrn\") pod \"469bdd06-dc45-43a3-9537-51dca073b9b0\" (UID: \"469bdd06-dc45-43a3-9537-51dca073b9b0\") " Mar 08 00:49:40 crc kubenswrapper[4762]: I0308 00:49:40.794583 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/469bdd06-dc45-43a3-9537-51dca073b9b0-utilities" (OuterVolumeSpecName: "utilities") pod "469bdd06-dc45-43a3-9537-51dca073b9b0" (UID: "469bdd06-dc45-43a3-9537-51dca073b9b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:49:40 crc kubenswrapper[4762]: I0308 00:49:40.799200 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/469bdd06-dc45-43a3-9537-51dca073b9b0-kube-api-access-wkqrn" (OuterVolumeSpecName: "kube-api-access-wkqrn") pod "469bdd06-dc45-43a3-9537-51dca073b9b0" (UID: "469bdd06-dc45-43a3-9537-51dca073b9b0"). InnerVolumeSpecName "kube-api-access-wkqrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:49:40 crc kubenswrapper[4762]: I0308 00:49:40.848937 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/469bdd06-dc45-43a3-9537-51dca073b9b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "469bdd06-dc45-43a3-9537-51dca073b9b0" (UID: "469bdd06-dc45-43a3-9537-51dca073b9b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:49:40 crc kubenswrapper[4762]: I0308 00:49:40.895597 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/469bdd06-dc45-43a3-9537-51dca073b9b0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:40 crc kubenswrapper[4762]: I0308 00:49:40.895640 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/469bdd06-dc45-43a3-9537-51dca073b9b0-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:40 crc kubenswrapper[4762]: I0308 00:49:40.895655 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkqrn\" (UniqueName: \"kubernetes.io/projected/469bdd06-dc45-43a3-9537-51dca073b9b0-kube-api-access-wkqrn\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:41 crc kubenswrapper[4762]: E0308 00:49:41.096516 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Mar 08 00:49:41 crc kubenswrapper[4762]: E0308 00:49:41.096572 4762 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Mar 08 00:49:41 crc kubenswrapper[4762]: E0308 00:49:41.096698 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-86qzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-55bpv_openstack(972a5997-389c-467b-ae2f-bc678f076277): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 08 00:49:41 crc kubenswrapper[4762]: E0308 00:49:41.098401 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-55bpv" podUID="972a5997-389c-467b-ae2f-bc678f076277" Mar 08 00:49:41 crc kubenswrapper[4762]: I0308 00:49:41.416173 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j9r4z" event={"ID":"469bdd06-dc45-43a3-9537-51dca073b9b0","Type":"ContainerDied","Data":"6f67734d30a71f19c68e22bb1ef04b12b90afb9be21271fb0615521e7a04dec4"} Mar 08 00:49:41 crc kubenswrapper[4762]: I0308 00:49:41.416219 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j9r4z" Mar 08 00:49:41 crc kubenswrapper[4762]: E0308 00:49:41.418286 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-55bpv" podUID="972a5997-389c-467b-ae2f-bc678f076277" Mar 08 00:49:41 crc kubenswrapper[4762]: I0308 00:49:41.457227 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j9r4z"] Mar 08 00:49:41 crc kubenswrapper[4762]: I0308 00:49:41.467284 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j9r4z"] Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.251588 4762 scope.go:117] "RemoveContainer" containerID="7bdb2ea1f65eb7942f2f7e3865b6d5415488a631bf1ade298c09336b2f2e6d96" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.385291 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.454076 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"543cbbde-da2d-43c4-87f9-85f8e4e90101","Type":"ContainerDied","Data":"c4d7180f7793f78335eaf0ab2d0ccdfa1fb03762f9b6bb2361d29db7b975b222"} Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.454164 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.535769 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-tls\") pod \"543cbbde-da2d-43c4-87f9-85f8e4e90101\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.535811 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-confd\") pod \"543cbbde-da2d-43c4-87f9-85f8e4e90101\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.535848 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/543cbbde-da2d-43c4-87f9-85f8e4e90101-pod-info\") pod \"543cbbde-da2d-43c4-87f9-85f8e4e90101\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.535944 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-erlang-cookie\") pod \"543cbbde-da2d-43c4-87f9-85f8e4e90101\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.536086 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-plugins\") pod \"543cbbde-da2d-43c4-87f9-85f8e4e90101\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.536121 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"543cbbde-da2d-43c4-87f9-85f8e4e90101\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.536145 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-plugins-conf\") pod \"543cbbde-da2d-43c4-87f9-85f8e4e90101\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.536198 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwlj2\" (UniqueName: \"kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-kube-api-access-vwlj2\") pod \"543cbbde-da2d-43c4-87f9-85f8e4e90101\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.536224 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-config-data\") pod \"543cbbde-da2d-43c4-87f9-85f8e4e90101\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.536253 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/543cbbde-da2d-43c4-87f9-85f8e4e90101-erlang-cookie-secret\") pod \"543cbbde-da2d-43c4-87f9-85f8e4e90101\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.536270 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-server-conf\") pod \"543cbbde-da2d-43c4-87f9-85f8e4e90101\" (UID: \"543cbbde-da2d-43c4-87f9-85f8e4e90101\") " Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 
00:49:42.538370 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "543cbbde-da2d-43c4-87f9-85f8e4e90101" (UID: "543cbbde-da2d-43c4-87f9-85f8e4e90101"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.538470 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "543cbbde-da2d-43c4-87f9-85f8e4e90101" (UID: "543cbbde-da2d-43c4-87f9-85f8e4e90101"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.539204 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "543cbbde-da2d-43c4-87f9-85f8e4e90101" (UID: "543cbbde-da2d-43c4-87f9-85f8e4e90101"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.542097 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "543cbbde-da2d-43c4-87f9-85f8e4e90101" (UID: "543cbbde-da2d-43c4-87f9-85f8e4e90101"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.547634 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/543cbbde-da2d-43c4-87f9-85f8e4e90101-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "543cbbde-da2d-43c4-87f9-85f8e4e90101" (UID: "543cbbde-da2d-43c4-87f9-85f8e4e90101"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.550245 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "543cbbde-da2d-43c4-87f9-85f8e4e90101" (UID: "543cbbde-da2d-43c4-87f9-85f8e4e90101"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.554925 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/543cbbde-da2d-43c4-87f9-85f8e4e90101-pod-info" (OuterVolumeSpecName: "pod-info") pod "543cbbde-da2d-43c4-87f9-85f8e4e90101" (UID: "543cbbde-da2d-43c4-87f9-85f8e4e90101"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.555241 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-kube-api-access-vwlj2" (OuterVolumeSpecName: "kube-api-access-vwlj2") pod "543cbbde-da2d-43c4-87f9-85f8e4e90101" (UID: "543cbbde-da2d-43c4-87f9-85f8e4e90101"). InnerVolumeSpecName "kube-api-access-vwlj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.574697 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-config-data" (OuterVolumeSpecName: "config-data") pod "543cbbde-da2d-43c4-87f9-85f8e4e90101" (UID: "543cbbde-da2d-43c4-87f9-85f8e4e90101"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.601461 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-server-conf" (OuterVolumeSpecName: "server-conf") pod "543cbbde-da2d-43c4-87f9-85f8e4e90101" (UID: "543cbbde-da2d-43c4-87f9-85f8e4e90101"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.643123 4762 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.643178 4762 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/543cbbde-da2d-43c4-87f9-85f8e4e90101-pod-info\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.643192 4762 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.643205 4762 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-plugins\") on node \"crc\" DevicePath 
\"\"" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.643246 4762 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.643261 4762 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.643275 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwlj2\" (UniqueName: \"kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-kube-api-access-vwlj2\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.643288 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.643307 4762 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/543cbbde-da2d-43c4-87f9-85f8e4e90101-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.643319 4762 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/543cbbde-da2d-43c4-87f9-85f8e4e90101-server-conf\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.687972 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "543cbbde-da2d-43c4-87f9-85f8e4e90101" (UID: "543cbbde-da2d-43c4-87f9-85f8e4e90101"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.688975 4762 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.753046 4762 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.753087 4762 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/543cbbde-da2d-43c4-87f9-85f8e4e90101-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.864753 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.878104 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.891935 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 08 00:49:42 crc kubenswrapper[4762]: E0308 00:49:42.892714 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469bdd06-dc45-43a3-9537-51dca073b9b0" containerName="registry-server" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.892847 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="469bdd06-dc45-43a3-9537-51dca073b9b0" containerName="registry-server" Mar 08 00:49:42 crc kubenswrapper[4762]: E0308 00:49:42.892923 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469bdd06-dc45-43a3-9537-51dca073b9b0" containerName="extract-utilities" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.892992 4762 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="469bdd06-dc45-43a3-9537-51dca073b9b0" containerName="extract-utilities" Mar 08 00:49:42 crc kubenswrapper[4762]: E0308 00:49:42.893048 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543cbbde-da2d-43c4-87f9-85f8e4e90101" containerName="setup-container" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.893100 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="543cbbde-da2d-43c4-87f9-85f8e4e90101" containerName="setup-container" Mar 08 00:49:42 crc kubenswrapper[4762]: E0308 00:49:42.893170 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469bdd06-dc45-43a3-9537-51dca073b9b0" containerName="extract-content" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.893220 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="469bdd06-dc45-43a3-9537-51dca073b9b0" containerName="extract-content" Mar 08 00:49:42 crc kubenswrapper[4762]: E0308 00:49:42.893277 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="543cbbde-da2d-43c4-87f9-85f8e4e90101" containerName="rabbitmq" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.893330 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="543cbbde-da2d-43c4-87f9-85f8e4e90101" containerName="rabbitmq" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.893587 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="469bdd06-dc45-43a3-9537-51dca073b9b0" containerName="registry-server" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.893665 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="543cbbde-da2d-43c4-87f9-85f8e4e90101" containerName="rabbitmq" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.895200 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.899587 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.899737 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.899940 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.900118 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.900229 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.900426 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.900679 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-brbgv" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.911381 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.956728 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65qjc\" (UniqueName: \"kubernetes.io/projected/cea7862c-6515-43de-826c-87e285980ca0-kube-api-access-65qjc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.956903 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cea7862c-6515-43de-826c-87e285980ca0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.956977 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cea7862c-6515-43de-826c-87e285980ca0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.956999 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cea7862c-6515-43de-826c-87e285980ca0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.957035 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cea7862c-6515-43de-826c-87e285980ca0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.957073 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cea7862c-6515-43de-826c-87e285980ca0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.957106 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cea7862c-6515-43de-826c-87e285980ca0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.957144 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cea7862c-6515-43de-826c-87e285980ca0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.957176 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cea7862c-6515-43de-826c-87e285980ca0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.957210 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cea7862c-6515-43de-826c-87e285980ca0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:42 crc kubenswrapper[4762]: I0308 00:49:42.957231 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.059383 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/cea7862c-6515-43de-826c-87e285980ca0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.059533 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cea7862c-6515-43de-826c-87e285980ca0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.059575 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.059624 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65qjc\" (UniqueName: \"kubernetes.io/projected/cea7862c-6515-43de-826c-87e285980ca0-kube-api-access-65qjc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.059682 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cea7862c-6515-43de-826c-87e285980ca0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.059721 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cea7862c-6515-43de-826c-87e285980ca0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.059744 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cea7862c-6515-43de-826c-87e285980ca0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.059805 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cea7862c-6515-43de-826c-87e285980ca0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.059853 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cea7862c-6515-43de-826c-87e285980ca0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.059897 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cea7862c-6515-43de-826c-87e285980ca0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.059949 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cea7862c-6515-43de-826c-87e285980ca0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" 
Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.059853 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.061185 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/cea7862c-6515-43de-826c-87e285980ca0-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.061411 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/cea7862c-6515-43de-826c-87e285980ca0-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.061856 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/cea7862c-6515-43de-826c-87e285980ca0-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.062047 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/cea7862c-6515-43de-826c-87e285980ca0-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.062476 4762 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/cea7862c-6515-43de-826c-87e285980ca0-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.063955 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/cea7862c-6515-43de-826c-87e285980ca0-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.064489 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/cea7862c-6515-43de-826c-87e285980ca0-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.065308 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/cea7862c-6515-43de-826c-87e285980ca0-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.065342 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/cea7862c-6515-43de-826c-87e285980ca0-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.077497 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65qjc\" (UniqueName: 
\"kubernetes.io/projected/cea7862c-6515-43de-826c-87e285980ca0-kube-api-access-65qjc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.100625 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"cea7862c-6515-43de-826c-87e285980ca0\") " pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.237851 4762 scope.go:117] "RemoveContainer" containerID="434d423e6001833c7865a708ebacf64aae915447e89050e250b66d768f4a0ca7" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.243593 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.277649 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="469bdd06-dc45-43a3-9537-51dca073b9b0" path="/var/lib/kubelet/pods/469bdd06-dc45-43a3-9537-51dca073b9b0/volumes" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.278862 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="543cbbde-da2d-43c4-87f9-85f8e4e90101" path="/var/lib/kubelet/pods/543cbbde-da2d-43c4-87f9-85f8e4e90101/volumes" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.336191 4762 scope.go:117] "RemoveContainer" containerID="3474f38faca4fc31ef89dda2088ef64b678a43328d54d799b1ada1618beca615" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.456100 4762 scope.go:117] "RemoveContainer" containerID="1d15812b57943ec9af2c22c7cf035bfc7f080ab636c2473d6263f1899db3cc25" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.551314 4762 scope.go:117] "RemoveContainer" containerID="3cad6bed315c16867c9504799cfea0ea54b8be9f89ee1a5dc667c079b4f2b098" Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 
00:49:43.583290 4762 scope.go:117] "RemoveContainer" containerID="da1f3f1b9b29fd8f8086fd577e095c3fa0723111de086ccb0c46eea6316e3241" Mar 08 00:49:43 crc kubenswrapper[4762]: W0308 00:49:43.717028 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod605d5ec2_5c55_4d38_8403_d2f2f19ab8e1.slice/crio-c9288d9c573b57c5c167479e1b613d233b48e79a265a7224a5defd05d760f22d WatchSource:0}: Error finding container c9288d9c573b57c5c167479e1b613d233b48e79a265a7224a5defd05d760f22d: Status 404 returned error can't find the container with id c9288d9c573b57c5c167479e1b613d233b48e79a265a7224a5defd05d760f22d Mar 08 00:49:43 crc kubenswrapper[4762]: W0308 00:49:43.720396 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83567ea1_f607_4be2_b0af_6d09bcf74e06.slice/crio-a27e06a65cfa58cd51f03d24ed880edfdf25564d5bf224db9335d3ec029d0f42 WatchSource:0}: Error finding container a27e06a65cfa58cd51f03d24ed880edfdf25564d5bf224db9335d3ec029d0f42: Status 404 returned error can't find the container with id a27e06a65cfa58cd51f03d24ed880edfdf25564d5bf224db9335d3ec029d0f42 Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.722368 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-8wmgv"] Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.736328 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 08 00:49:43 crc kubenswrapper[4762]: I0308 00:49:43.848736 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 08 00:49:44 crc kubenswrapper[4762]: I0308 00:49:44.501636 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"721da70c-0049-498f-927c-dadcd0867152","Type":"ContainerStarted","Data":"5a3b9598d8c3d9367c7ff01d198b875e2be8ed0ff2556b7f1a37464a26e12fe7"} Mar 08 
00:49:44 crc kubenswrapper[4762]: I0308 00:49:44.508167 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cea7862c-6515-43de-826c-87e285980ca0","Type":"ContainerStarted","Data":"d3f6abd9a9c40f257de191b974f031e7013327022cf2543256b4d973ddfce757"} Mar 08 00:49:44 crc kubenswrapper[4762]: I0308 00:49:44.515999 4762 generic.go:334] "Generic (PLEG): container finished" podID="605d5ec2-5c55-4d38-8403-d2f2f19ab8e1" containerID="103ce4e829d702a15656cd916a4eda79a07a82ed6d00daffe4e8e19586018014" exitCode=0 Mar 08 00:49:44 crc kubenswrapper[4762]: I0308 00:49:44.516057 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" event={"ID":"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1","Type":"ContainerDied","Data":"103ce4e829d702a15656cd916a4eda79a07a82ed6d00daffe4e8e19586018014"} Mar 08 00:49:44 crc kubenswrapper[4762]: I0308 00:49:44.516085 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" event={"ID":"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1","Type":"ContainerStarted","Data":"c9288d9c573b57c5c167479e1b613d233b48e79a265a7224a5defd05d760f22d"} Mar 08 00:49:44 crc kubenswrapper[4762]: I0308 00:49:44.521218 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"83567ea1-f607-4be2-b0af-6d09bcf74e06","Type":"ContainerStarted","Data":"a27e06a65cfa58cd51f03d24ed880edfdf25564d5bf224db9335d3ec029d0f42"} Mar 08 00:49:45 crc kubenswrapper[4762]: I0308 00:49:45.540959 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" event={"ID":"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1","Type":"ContainerStarted","Data":"7e6b5f855284c8faaf5bb489e300ebfa2761eec1d5246980af64a6b87b735c97"} Mar 08 00:49:45 crc kubenswrapper[4762]: I0308 00:49:45.541426 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:45 
crc kubenswrapper[4762]: I0308 00:49:45.544721 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"721da70c-0049-498f-927c-dadcd0867152","Type":"ContainerStarted","Data":"0a1b14c854cb50902da15746f6462cf318ba092defa5ffd08d6746ef9847d470"} Mar 08 00:49:46 crc kubenswrapper[4762]: I0308 00:49:46.560508 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"721da70c-0049-498f-927c-dadcd0867152","Type":"ContainerStarted","Data":"240925bd14a1b5b2baa148f2e674073065cdc345e398514d38ceaaef3bcd9b5f"} Mar 08 00:49:46 crc kubenswrapper[4762]: I0308 00:49:46.565794 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cea7862c-6515-43de-826c-87e285980ca0","Type":"ContainerStarted","Data":"e500514b5ba4d6d2155370fc96579af7a0e7b30ae364ad245c503e15c1ba5d16"} Mar 08 00:49:46 crc kubenswrapper[4762]: I0308 00:49:46.624366 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" podStartSLOduration=8.624342796 podStartE2EDuration="8.624342796s" podCreationTimestamp="2026-03-08 00:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:49:45.574159389 +0000 UTC m=+1607.048303763" watchObservedRunningTime="2026-03-08 00:49:46.624342796 +0000 UTC m=+1608.098487150" Mar 08 00:49:47 crc kubenswrapper[4762]: I0308 00:49:47.593436 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"83567ea1-f607-4be2-b0af-6d09bcf74e06","Type":"ContainerStarted","Data":"0144162d6e612bcc4933713908f44d044e74c82558fcc40a947791846f1dc939"} Mar 08 00:49:48 crc kubenswrapper[4762]: I0308 00:49:48.616135 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"721da70c-0049-498f-927c-dadcd0867152","Type":"ContainerStarted","Data":"b7af471a865b12dd686483a2062192e5a997f6bd613475cd6ac179e46b3ffae2"} Mar 08 00:49:48 crc kubenswrapper[4762]: I0308 00:49:48.617024 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 08 00:49:48 crc kubenswrapper[4762]: I0308 00:49:48.676582 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.496076126 podStartE2EDuration="24.676562244s" podCreationTimestamp="2026-03-08 00:49:24 +0000 UTC" firstStartedPulling="2026-03-08 00:49:25.315700429 +0000 UTC m=+1586.789844773" lastFinishedPulling="2026-03-08 00:49:47.496186517 +0000 UTC m=+1608.970330891" observedRunningTime="2026-03-08 00:49:48.649658114 +0000 UTC m=+1610.123802478" watchObservedRunningTime="2026-03-08 00:49:48.676562244 +0000 UTC m=+1610.150706608" Mar 08 00:49:53 crc kubenswrapper[4762]: I0308 00:49:53.916157 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.032009 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-h7wxf"] Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.032276 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" podUID="12af2dd4-bf20-4ddf-81d2-d27e181b934f" containerName="dnsmasq-dns" containerID="cri-o://509eab5269db47ee5f7acbd8f6b40d054ca6b7ea5d939918549c8f63d942674b" gracePeriod=10 Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.218488 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x"] Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.220342 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.241512 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x"] Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.323298 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-openstack-edpm-ipam\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.323384 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-config\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.323415 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-ovsdbserver-nb\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.323522 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-dns-swift-storage-0\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.323644 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h89xf\" (UniqueName: \"kubernetes.io/projected/adfd1d4d-0990-4fc3-a48c-39efca58f753-kube-api-access-h89xf\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.323675 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-ovsdbserver-sb\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.324022 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-dns-svc\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.428831 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-dns-svc\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.428911 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-openstack-edpm-ipam\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.428954 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-ovsdbserver-nb\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.428972 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-config\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.428992 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-dns-swift-storage-0\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.429029 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h89xf\" (UniqueName: \"kubernetes.io/projected/adfd1d4d-0990-4fc3-a48c-39efca58f753-kube-api-access-h89xf\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.429045 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-ovsdbserver-sb\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.430044 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-dns-svc\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.430117 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-openstack-edpm-ipam\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.430174 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-config\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.430271 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-ovsdbserver-nb\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.430297 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-ovsdbserver-sb\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.430434 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-dns-swift-storage-0\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.448934 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h89xf\" (UniqueName: \"kubernetes.io/projected/adfd1d4d-0990-4fc3-a48c-39efca58f753-kube-api-access-h89xf\") pod \"dnsmasq-dns-5cf7b6cbf7-qlc7x\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") " pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.547239 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.667725 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.711306 4762 generic.go:334] "Generic (PLEG): container finished" podID="12af2dd4-bf20-4ddf-81d2-d27e181b934f" containerID="509eab5269db47ee5f7acbd8f6b40d054ca6b7ea5d939918549c8f63d942674b" exitCode=0 Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.711370 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" event={"ID":"12af2dd4-bf20-4ddf-81d2-d27e181b934f","Type":"ContainerDied","Data":"509eab5269db47ee5f7acbd8f6b40d054ca6b7ea5d939918549c8f63d942674b"} Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.711410 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" event={"ID":"12af2dd4-bf20-4ddf-81d2-d27e181b934f","Type":"ContainerDied","Data":"0c2435265fc98006d75e5f522d491aa69835417068c610f08ecb76fe95b04a57"} Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.711444 4762 scope.go:117] "RemoveContainer" 
containerID="509eab5269db47ee5f7acbd8f6b40d054ca6b7ea5d939918549c8f63d942674b" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.711648 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-h7wxf" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.734927 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-dns-svc\") pod \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.735041 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-dns-swift-storage-0\") pod \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.735088 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-ovsdbserver-sb\") pod \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.735127 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtlv4\" (UniqueName: \"kubernetes.io/projected/12af2dd4-bf20-4ddf-81d2-d27e181b934f-kube-api-access-jtlv4\") pod \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.735160 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-ovsdbserver-nb\") pod \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\" 
(UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.735189 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-config\") pod \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\" (UID: \"12af2dd4-bf20-4ddf-81d2-d27e181b934f\") " Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.764418 4762 scope.go:117] "RemoveContainer" containerID="5f84c0d25db48c19a37d4a3d0213248e57fb2e98b2abeaf66984e0880e831d94" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.766408 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12af2dd4-bf20-4ddf-81d2-d27e181b934f-kube-api-access-jtlv4" (OuterVolumeSpecName: "kube-api-access-jtlv4") pod "12af2dd4-bf20-4ddf-81d2-d27e181b934f" (UID: "12af2dd4-bf20-4ddf-81d2-d27e181b934f"). InnerVolumeSpecName "kube-api-access-jtlv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.795491 4762 scope.go:117] "RemoveContainer" containerID="509eab5269db47ee5f7acbd8f6b40d054ca6b7ea5d939918549c8f63d942674b" Mar 08 00:49:54 crc kubenswrapper[4762]: E0308 00:49:54.795950 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"509eab5269db47ee5f7acbd8f6b40d054ca6b7ea5d939918549c8f63d942674b\": container with ID starting with 509eab5269db47ee5f7acbd8f6b40d054ca6b7ea5d939918549c8f63d942674b not found: ID does not exist" containerID="509eab5269db47ee5f7acbd8f6b40d054ca6b7ea5d939918549c8f63d942674b" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.795992 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"509eab5269db47ee5f7acbd8f6b40d054ca6b7ea5d939918549c8f63d942674b"} err="failed to get container status 
\"509eab5269db47ee5f7acbd8f6b40d054ca6b7ea5d939918549c8f63d942674b\": rpc error: code = NotFound desc = could not find container \"509eab5269db47ee5f7acbd8f6b40d054ca6b7ea5d939918549c8f63d942674b\": container with ID starting with 509eab5269db47ee5f7acbd8f6b40d054ca6b7ea5d939918549c8f63d942674b not found: ID does not exist" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.796054 4762 scope.go:117] "RemoveContainer" containerID="5f84c0d25db48c19a37d4a3d0213248e57fb2e98b2abeaf66984e0880e831d94" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.797660 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-config" (OuterVolumeSpecName: "config") pod "12af2dd4-bf20-4ddf-81d2-d27e181b934f" (UID: "12af2dd4-bf20-4ddf-81d2-d27e181b934f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:49:54 crc kubenswrapper[4762]: E0308 00:49:54.798254 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f84c0d25db48c19a37d4a3d0213248e57fb2e98b2abeaf66984e0880e831d94\": container with ID starting with 5f84c0d25db48c19a37d4a3d0213248e57fb2e98b2abeaf66984e0880e831d94 not found: ID does not exist" containerID="5f84c0d25db48c19a37d4a3d0213248e57fb2e98b2abeaf66984e0880e831d94" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.798308 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f84c0d25db48c19a37d4a3d0213248e57fb2e98b2abeaf66984e0880e831d94"} err="failed to get container status \"5f84c0d25db48c19a37d4a3d0213248e57fb2e98b2abeaf66984e0880e831d94\": rpc error: code = NotFound desc = could not find container \"5f84c0d25db48c19a37d4a3d0213248e57fb2e98b2abeaf66984e0880e831d94\": container with ID starting with 5f84c0d25db48c19a37d4a3d0213248e57fb2e98b2abeaf66984e0880e831d94 not found: ID does not exist" Mar 08 00:49:54 
crc kubenswrapper[4762]: I0308 00:49:54.800273 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "12af2dd4-bf20-4ddf-81d2-d27e181b934f" (UID: "12af2dd4-bf20-4ddf-81d2-d27e181b934f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.803706 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "12af2dd4-bf20-4ddf-81d2-d27e181b934f" (UID: "12af2dd4-bf20-4ddf-81d2-d27e181b934f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.813949 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "12af2dd4-bf20-4ddf-81d2-d27e181b934f" (UID: "12af2dd4-bf20-4ddf-81d2-d27e181b934f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.830652 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "12af2dd4-bf20-4ddf-81d2-d27e181b934f" (UID: "12af2dd4-bf20-4ddf-81d2-d27e181b934f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.851456 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.851492 4762 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.851506 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.851516 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtlv4\" (UniqueName: \"kubernetes.io/projected/12af2dd4-bf20-4ddf-81d2-d27e181b934f-kube-api-access-jtlv4\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.851525 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:54 crc kubenswrapper[4762]: I0308 00:49:54.851533 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12af2dd4-bf20-4ddf-81d2-d27e181b934f-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:49:55 crc kubenswrapper[4762]: I0308 00:49:55.029632 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x"] Mar 08 00:49:55 crc kubenswrapper[4762]: I0308 00:49:55.052238 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-h7wxf"] Mar 08 00:49:55 
crc kubenswrapper[4762]: I0308 00:49:55.063140 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-h7wxf"] Mar 08 00:49:55 crc kubenswrapper[4762]: I0308 00:49:55.277813 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12af2dd4-bf20-4ddf-81d2-d27e181b934f" path="/var/lib/kubelet/pods/12af2dd4-bf20-4ddf-81d2-d27e181b934f/volumes" Mar 08 00:49:55 crc kubenswrapper[4762]: I0308 00:49:55.730058 4762 generic.go:334] "Generic (PLEG): container finished" podID="adfd1d4d-0990-4fc3-a48c-39efca58f753" containerID="6778b66953e7642989df42cfc4bc7389f0856f4259dddebb397ff67a40d4c550" exitCode=0 Mar 08 00:49:55 crc kubenswrapper[4762]: I0308 00:49:55.730127 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" event={"ID":"adfd1d4d-0990-4fc3-a48c-39efca58f753","Type":"ContainerDied","Data":"6778b66953e7642989df42cfc4bc7389f0856f4259dddebb397ff67a40d4c550"} Mar 08 00:49:55 crc kubenswrapper[4762]: I0308 00:49:55.730154 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" event={"ID":"adfd1d4d-0990-4fc3-a48c-39efca58f753","Type":"ContainerStarted","Data":"f0fed0afb3a339cd8b5eb305f19d8e18a380e468850babf46d49acdc59f4e359"} Mar 08 00:49:56 crc kubenswrapper[4762]: I0308 00:49:56.753056 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" event={"ID":"adfd1d4d-0990-4fc3-a48c-39efca58f753","Type":"ContainerStarted","Data":"425395d4785b1c07759e01f89237e39fd856a67f8311c92a854cf161b7d759c8"} Mar 08 00:49:56 crc kubenswrapper[4762]: I0308 00:49:56.753397 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:49:57 crc kubenswrapper[4762]: I0308 00:49:57.771061 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-55bpv" 
event={"ID":"972a5997-389c-467b-ae2f-bc678f076277","Type":"ContainerStarted","Data":"68450cd8d95dabcb83a0e4ccc06c26f9b1893e25d494127354a4137a0519de2c"} Mar 08 00:49:57 crc kubenswrapper[4762]: I0308 00:49:57.811277 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-55bpv" podStartSLOduration=3.055941522 podStartE2EDuration="38.811256848s" podCreationTimestamp="2026-03-08 00:49:19 +0000 UTC" firstStartedPulling="2026-03-08 00:49:20.683975428 +0000 UTC m=+1582.158119772" lastFinishedPulling="2026-03-08 00:49:56.439290744 +0000 UTC m=+1617.913435098" observedRunningTime="2026-03-08 00:49:57.796845728 +0000 UTC m=+1619.270990112" watchObservedRunningTime="2026-03-08 00:49:57.811256848 +0000 UTC m=+1619.285401202" Mar 08 00:49:57 crc kubenswrapper[4762]: I0308 00:49:57.811967 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" podStartSLOduration=3.81195751 podStartE2EDuration="3.81195751s" podCreationTimestamp="2026-03-08 00:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:49:56.778680336 +0000 UTC m=+1618.252824700" watchObservedRunningTime="2026-03-08 00:49:57.81195751 +0000 UTC m=+1619.286101864" Mar 08 00:49:59 crc kubenswrapper[4762]: I0308 00:49:59.801050 4762 generic.go:334] "Generic (PLEG): container finished" podID="972a5997-389c-467b-ae2f-bc678f076277" containerID="68450cd8d95dabcb83a0e4ccc06c26f9b1893e25d494127354a4137a0519de2c" exitCode=0 Mar 08 00:49:59 crc kubenswrapper[4762]: I0308 00:49:59.801095 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-55bpv" event={"ID":"972a5997-389c-467b-ae2f-bc678f076277","Type":"ContainerDied","Data":"68450cd8d95dabcb83a0e4ccc06c26f9b1893e25d494127354a4137a0519de2c"} Mar 08 00:50:00 crc kubenswrapper[4762]: I0308 00:50:00.155786 4762 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-infra/auto-csr-approver-29548850-fbqsx"] Mar 08 00:50:00 crc kubenswrapper[4762]: E0308 00:50:00.158169 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12af2dd4-bf20-4ddf-81d2-d27e181b934f" containerName="init" Mar 08 00:50:00 crc kubenswrapper[4762]: I0308 00:50:00.158204 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="12af2dd4-bf20-4ddf-81d2-d27e181b934f" containerName="init" Mar 08 00:50:00 crc kubenswrapper[4762]: E0308 00:50:00.158255 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12af2dd4-bf20-4ddf-81d2-d27e181b934f" containerName="dnsmasq-dns" Mar 08 00:50:00 crc kubenswrapper[4762]: I0308 00:50:00.158267 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="12af2dd4-bf20-4ddf-81d2-d27e181b934f" containerName="dnsmasq-dns" Mar 08 00:50:00 crc kubenswrapper[4762]: I0308 00:50:00.158637 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="12af2dd4-bf20-4ddf-81d2-d27e181b934f" containerName="dnsmasq-dns" Mar 08 00:50:00 crc kubenswrapper[4762]: I0308 00:50:00.159873 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548850-fbqsx" Mar 08 00:50:00 crc kubenswrapper[4762]: I0308 00:50:00.162953 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 00:50:00 crc kubenswrapper[4762]: I0308 00:50:00.170098 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 00:50:00 crc kubenswrapper[4762]: I0308 00:50:00.170621 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 00:50:00 crc kubenswrapper[4762]: I0308 00:50:00.202487 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548850-fbqsx"] Mar 08 00:50:00 crc kubenswrapper[4762]: I0308 00:50:00.282133 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nh8b\" (UniqueName: \"kubernetes.io/projected/1ec41a8c-126a-4fb2-972f-bad18afb2398-kube-api-access-5nh8b\") pod \"auto-csr-approver-29548850-fbqsx\" (UID: \"1ec41a8c-126a-4fb2-972f-bad18afb2398\") " pod="openshift-infra/auto-csr-approver-29548850-fbqsx" Mar 08 00:50:00 crc kubenswrapper[4762]: I0308 00:50:00.384441 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nh8b\" (UniqueName: \"kubernetes.io/projected/1ec41a8c-126a-4fb2-972f-bad18afb2398-kube-api-access-5nh8b\") pod \"auto-csr-approver-29548850-fbqsx\" (UID: \"1ec41a8c-126a-4fb2-972f-bad18afb2398\") " pod="openshift-infra/auto-csr-approver-29548850-fbqsx" Mar 08 00:50:00 crc kubenswrapper[4762]: I0308 00:50:00.427858 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nh8b\" (UniqueName: \"kubernetes.io/projected/1ec41a8c-126a-4fb2-972f-bad18afb2398-kube-api-access-5nh8b\") pod \"auto-csr-approver-29548850-fbqsx\" (UID: \"1ec41a8c-126a-4fb2-972f-bad18afb2398\") " 
pod="openshift-infra/auto-csr-approver-29548850-fbqsx" Mar 08 00:50:00 crc kubenswrapper[4762]: I0308 00:50:00.498058 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548850-fbqsx" Mar 08 00:50:01 crc kubenswrapper[4762]: I0308 00:50:01.028239 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548850-fbqsx"] Mar 08 00:50:01 crc kubenswrapper[4762]: I0308 00:50:01.162003 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-55bpv" Mar 08 00:50:01 crc kubenswrapper[4762]: I0308 00:50:01.301448 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86qzc\" (UniqueName: \"kubernetes.io/projected/972a5997-389c-467b-ae2f-bc678f076277-kube-api-access-86qzc\") pod \"972a5997-389c-467b-ae2f-bc678f076277\" (UID: \"972a5997-389c-467b-ae2f-bc678f076277\") " Mar 08 00:50:01 crc kubenswrapper[4762]: I0308 00:50:01.301696 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972a5997-389c-467b-ae2f-bc678f076277-combined-ca-bundle\") pod \"972a5997-389c-467b-ae2f-bc678f076277\" (UID: \"972a5997-389c-467b-ae2f-bc678f076277\") " Mar 08 00:50:01 crc kubenswrapper[4762]: I0308 00:50:01.301950 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972a5997-389c-467b-ae2f-bc678f076277-config-data\") pod \"972a5997-389c-467b-ae2f-bc678f076277\" (UID: \"972a5997-389c-467b-ae2f-bc678f076277\") " Mar 08 00:50:01 crc kubenswrapper[4762]: I0308 00:50:01.307068 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/972a5997-389c-467b-ae2f-bc678f076277-kube-api-access-86qzc" (OuterVolumeSpecName: "kube-api-access-86qzc") pod "972a5997-389c-467b-ae2f-bc678f076277" (UID: 
"972a5997-389c-467b-ae2f-bc678f076277"). InnerVolumeSpecName "kube-api-access-86qzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:50:01 crc kubenswrapper[4762]: I0308 00:50:01.331230 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972a5997-389c-467b-ae2f-bc678f076277-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "972a5997-389c-467b-ae2f-bc678f076277" (UID: "972a5997-389c-467b-ae2f-bc678f076277"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:01 crc kubenswrapper[4762]: I0308 00:50:01.382149 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/972a5997-389c-467b-ae2f-bc678f076277-config-data" (OuterVolumeSpecName: "config-data") pod "972a5997-389c-467b-ae2f-bc678f076277" (UID: "972a5997-389c-467b-ae2f-bc678f076277"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:01 crc kubenswrapper[4762]: I0308 00:50:01.405809 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/972a5997-389c-467b-ae2f-bc678f076277-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:01 crc kubenswrapper[4762]: I0308 00:50:01.406006 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/972a5997-389c-467b-ae2f-bc678f076277-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:01 crc kubenswrapper[4762]: I0308 00:50:01.406040 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86qzc\" (UniqueName: \"kubernetes.io/projected/972a5997-389c-467b-ae2f-bc678f076277-kube-api-access-86qzc\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:01 crc kubenswrapper[4762]: I0308 00:50:01.822308 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-55bpv" 
event={"ID":"972a5997-389c-467b-ae2f-bc678f076277","Type":"ContainerDied","Data":"d468800e17cfdea391d2a772e368a1725266e28000458ef4c23191acd88e8a4c"} Mar 08 00:50:01 crc kubenswrapper[4762]: I0308 00:50:01.822581 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d468800e17cfdea391d2a772e368a1725266e28000458ef4c23191acd88e8a4c" Mar 08 00:50:01 crc kubenswrapper[4762]: I0308 00:50:01.822582 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-55bpv" Mar 08 00:50:01 crc kubenswrapper[4762]: I0308 00:50:01.823531 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548850-fbqsx" event={"ID":"1ec41a8c-126a-4fb2-972f-bad18afb2398","Type":"ContainerStarted","Data":"91f612945f25f4addd6a4f454cdc9aa2a27dad40656143f41ec26b50f798b84b"} Mar 08 00:50:02 crc kubenswrapper[4762]: I0308 00:50:02.861525 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7588678759-6jpjt"] Mar 08 00:50:02 crc kubenswrapper[4762]: E0308 00:50:02.862423 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="972a5997-389c-467b-ae2f-bc678f076277" containerName="heat-db-sync" Mar 08 00:50:02 crc kubenswrapper[4762]: I0308 00:50:02.862440 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="972a5997-389c-467b-ae2f-bc678f076277" containerName="heat-db-sync" Mar 08 00:50:02 crc kubenswrapper[4762]: I0308 00:50:02.862658 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="972a5997-389c-467b-ae2f-bc678f076277" containerName="heat-db-sync" Mar 08 00:50:02 crc kubenswrapper[4762]: I0308 00:50:02.863442 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-7588678759-6jpjt" Mar 08 00:50:02 crc kubenswrapper[4762]: I0308 00:50:02.867510 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548850-fbqsx" event={"ID":"1ec41a8c-126a-4fb2-972f-bad18afb2398","Type":"ContainerStarted","Data":"000cb75f1ee491c6bd5d8e2cc10ed1b3e49115abb5148d46966fd22ac7f292d0"} Mar 08 00:50:02 crc kubenswrapper[4762]: I0308 00:50:02.879010 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7588678759-6jpjt"] Mar 08 00:50:02 crc kubenswrapper[4762]: I0308 00:50:02.911942 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-74bd49b68d-nxdxh"] Mar 08 00:50:02 crc kubenswrapper[4762]: I0308 00:50:02.913505 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:02 crc kubenswrapper[4762]: I0308 00:50:02.935203 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-74bd49b68d-nxdxh"] Mar 08 00:50:02 crc kubenswrapper[4762]: I0308 00:50:02.936434 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9976fcf2-7f49-45af-afe2-d5c3e07f2cac-config-data-custom\") pod \"heat-engine-7588678759-6jpjt\" (UID: \"9976fcf2-7f49-45af-afe2-d5c3e07f2cac\") " pod="openstack/heat-engine-7588678759-6jpjt" Mar 08 00:50:02 crc kubenswrapper[4762]: I0308 00:50:02.936465 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9976fcf2-7f49-45af-afe2-d5c3e07f2cac-combined-ca-bundle\") pod \"heat-engine-7588678759-6jpjt\" (UID: \"9976fcf2-7f49-45af-afe2-d5c3e07f2cac\") " pod="openstack/heat-engine-7588678759-6jpjt" Mar 08 00:50:02 crc kubenswrapper[4762]: I0308 00:50:02.936642 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwzn6\" (UniqueName: \"kubernetes.io/projected/9976fcf2-7f49-45af-afe2-d5c3e07f2cac-kube-api-access-jwzn6\") pod \"heat-engine-7588678759-6jpjt\" (UID: \"9976fcf2-7f49-45af-afe2-d5c3e07f2cac\") " pod="openstack/heat-engine-7588678759-6jpjt" Mar 08 00:50:02 crc kubenswrapper[4762]: I0308 00:50:02.936661 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9976fcf2-7f49-45af-afe2-d5c3e07f2cac-config-data\") pod \"heat-engine-7588678759-6jpjt\" (UID: \"9976fcf2-7f49-45af-afe2-d5c3e07f2cac\") " pod="openstack/heat-engine-7588678759-6jpjt" Mar 08 00:50:02 crc kubenswrapper[4762]: I0308 00:50:02.978863 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5f957556fb-8j7fl"] Mar 08 00:50:02 crc kubenswrapper[4762]: I0308 00:50:02.982024 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.004290 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5f957556fb-8j7fl"] Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.012048 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548850-fbqsx" podStartSLOduration=1.9496166069999998 podStartE2EDuration="3.012033662s" podCreationTimestamp="2026-03-08 00:50:00 +0000 UTC" firstStartedPulling="2026-03-08 00:50:01.030641295 +0000 UTC m=+1622.504785639" lastFinishedPulling="2026-03-08 00:50:02.09305834 +0000 UTC m=+1623.567202694" observedRunningTime="2026-03-08 00:50:02.935346828 +0000 UTC m=+1624.409491172" watchObservedRunningTime="2026-03-08 00:50:03.012033662 +0000 UTC m=+1624.486178006" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.039198 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-public-tls-certs\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.039252 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwzn6\" (UniqueName: \"kubernetes.io/projected/9976fcf2-7f49-45af-afe2-d5c3e07f2cac-kube-api-access-jwzn6\") pod \"heat-engine-7588678759-6jpjt\" (UID: \"9976fcf2-7f49-45af-afe2-d5c3e07f2cac\") " pod="openstack/heat-engine-7588678759-6jpjt" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.039274 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9976fcf2-7f49-45af-afe2-d5c3e07f2cac-config-data\") pod \"heat-engine-7588678759-6jpjt\" (UID: 
\"9976fcf2-7f49-45af-afe2-d5c3e07f2cac\") " pod="openstack/heat-engine-7588678759-6jpjt" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.039298 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f3408b-7c06-4574-b984-a4bd2ee0d99f-combined-ca-bundle\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.039321 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-config-data-custom\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.039347 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f3408b-7c06-4574-b984-a4bd2ee0d99f-config-data\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.039393 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drdbh\" (UniqueName: \"kubernetes.io/projected/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-kube-api-access-drdbh\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.039431 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/08f3408b-7c06-4574-b984-a4bd2ee0d99f-public-tls-certs\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.039463 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-config-data\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.039481 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-combined-ca-bundle\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.039513 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08f3408b-7c06-4574-b984-a4bd2ee0d99f-config-data-custom\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.039532 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9976fcf2-7f49-45af-afe2-d5c3e07f2cac-config-data-custom\") pod \"heat-engine-7588678759-6jpjt\" (UID: \"9976fcf2-7f49-45af-afe2-d5c3e07f2cac\") " pod="openstack/heat-engine-7588678759-6jpjt" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.039549 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9976fcf2-7f49-45af-afe2-d5c3e07f2cac-combined-ca-bundle\") pod \"heat-engine-7588678759-6jpjt\" (UID: \"9976fcf2-7f49-45af-afe2-d5c3e07f2cac\") " pod="openstack/heat-engine-7588678759-6jpjt" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.039571 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-internal-tls-certs\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.039632 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txbps\" (UniqueName: \"kubernetes.io/projected/08f3408b-7c06-4574-b984-a4bd2ee0d99f-kube-api-access-txbps\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.039651 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08f3408b-7c06-4574-b984-a4bd2ee0d99f-internal-tls-certs\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.057963 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9976fcf2-7f49-45af-afe2-d5c3e07f2cac-config-data-custom\") pod \"heat-engine-7588678759-6jpjt\" (UID: \"9976fcf2-7f49-45af-afe2-d5c3e07f2cac\") " pod="openstack/heat-engine-7588678759-6jpjt" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.058010 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9976fcf2-7f49-45af-afe2-d5c3e07f2cac-combined-ca-bundle\") pod \"heat-engine-7588678759-6jpjt\" (UID: \"9976fcf2-7f49-45af-afe2-d5c3e07f2cac\") " pod="openstack/heat-engine-7588678759-6jpjt" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.058507 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9976fcf2-7f49-45af-afe2-d5c3e07f2cac-config-data\") pod \"heat-engine-7588678759-6jpjt\" (UID: \"9976fcf2-7f49-45af-afe2-d5c3e07f2cac\") " pod="openstack/heat-engine-7588678759-6jpjt" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.060694 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwzn6\" (UniqueName: \"kubernetes.io/projected/9976fcf2-7f49-45af-afe2-d5c3e07f2cac-kube-api-access-jwzn6\") pod \"heat-engine-7588678759-6jpjt\" (UID: \"9976fcf2-7f49-45af-afe2-d5c3e07f2cac\") " pod="openstack/heat-engine-7588678759-6jpjt" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.141563 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drdbh\" (UniqueName: \"kubernetes.io/projected/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-kube-api-access-drdbh\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.141978 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08f3408b-7c06-4574-b984-a4bd2ee0d99f-public-tls-certs\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.142024 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-config-data\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.142041 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-combined-ca-bundle\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.142059 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08f3408b-7c06-4574-b984-a4bd2ee0d99f-config-data-custom\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.142086 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-internal-tls-certs\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.142152 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txbps\" (UniqueName: \"kubernetes.io/projected/08f3408b-7c06-4574-b984-a4bd2ee0d99f-kube-api-access-txbps\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.142171 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/08f3408b-7c06-4574-b984-a4bd2ee0d99f-internal-tls-certs\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.142192 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-public-tls-certs\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.142223 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f3408b-7c06-4574-b984-a4bd2ee0d99f-combined-ca-bundle\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.142245 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-config-data-custom\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.142269 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f3408b-7c06-4574-b984-a4bd2ee0d99f-config-data\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.146484 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08f3408b-7c06-4574-b984-a4bd2ee0d99f-config-data\") pod 
\"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.147354 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-internal-tls-certs\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.149448 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08f3408b-7c06-4574-b984-a4bd2ee0d99f-combined-ca-bundle\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.150098 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-config-data\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.151112 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-combined-ca-bundle\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.151243 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/08f3408b-7c06-4574-b984-a4bd2ee0d99f-config-data-custom\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " 
pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.152453 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-config-data-custom\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.153438 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-public-tls-certs\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.153692 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/08f3408b-7c06-4574-b984-a4bd2ee0d99f-internal-tls-certs\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.155130 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/08f3408b-7c06-4574-b984-a4bd2ee0d99f-public-tls-certs\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.165517 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drdbh\" (UniqueName: \"kubernetes.io/projected/d4a45fcf-70a0-4fc6-a592-f2c588936b3c-kube-api-access-drdbh\") pod \"heat-cfnapi-5f957556fb-8j7fl\" (UID: \"d4a45fcf-70a0-4fc6-a592-f2c588936b3c\") " pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc 
kubenswrapper[4762]: I0308 00:50:03.168449 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txbps\" (UniqueName: \"kubernetes.io/projected/08f3408b-7c06-4574-b984-a4bd2ee0d99f-kube-api-access-txbps\") pod \"heat-api-74bd49b68d-nxdxh\" (UID: \"08f3408b-7c06-4574-b984-a4bd2ee0d99f\") " pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.196369 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7588678759-6jpjt" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.242990 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.312876 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.719908 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-74bd49b68d-nxdxh"] Mar 08 00:50:03 crc kubenswrapper[4762]: W0308 00:50:03.720006 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08f3408b_7c06_4574_b984_a4bd2ee0d99f.slice/crio-e6c3ff51593324338397b7c23be176560d341fa8be0ef0f86d91ccfa512640c5 WatchSource:0}: Error finding container e6c3ff51593324338397b7c23be176560d341fa8be0ef0f86d91ccfa512640c5: Status 404 returned error can't find the container with id e6c3ff51593324338397b7c23be176560d341fa8be0ef0f86d91ccfa512640c5 Mar 08 00:50:03 crc kubenswrapper[4762]: W0308 00:50:03.728749 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9976fcf2_7f49_45af_afe2_d5c3e07f2cac.slice/crio-00aa303517c53f1704b4ddc952e9a8e7b40a48bd8d14a130d643cdfaa8880d3f WatchSource:0}: Error finding container 
00aa303517c53f1704b4ddc952e9a8e7b40a48bd8d14a130d643cdfaa8880d3f: Status 404 returned error can't find the container with id 00aa303517c53f1704b4ddc952e9a8e7b40a48bd8d14a130d643cdfaa8880d3f Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.730928 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7588678759-6jpjt"] Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.890221 4762 generic.go:334] "Generic (PLEG): container finished" podID="1ec41a8c-126a-4fb2-972f-bad18afb2398" containerID="000cb75f1ee491c6bd5d8e2cc10ed1b3e49115abb5148d46966fd22ac7f292d0" exitCode=0 Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.890337 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548850-fbqsx" event={"ID":"1ec41a8c-126a-4fb2-972f-bad18afb2398","Type":"ContainerDied","Data":"000cb75f1ee491c6bd5d8e2cc10ed1b3e49115abb5148d46966fd22ac7f292d0"} Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.896437 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-74bd49b68d-nxdxh" event={"ID":"08f3408b-7c06-4574-b984-a4bd2ee0d99f","Type":"ContainerStarted","Data":"e6c3ff51593324338397b7c23be176560d341fa8be0ef0f86d91ccfa512640c5"} Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.898027 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7588678759-6jpjt" event={"ID":"9976fcf2-7f49-45af-afe2-d5c3e07f2cac","Type":"ContainerStarted","Data":"00aa303517c53f1704b4ddc952e9a8e7b40a48bd8d14a130d643cdfaa8880d3f"} Mar 08 00:50:03 crc kubenswrapper[4762]: W0308 00:50:03.925539 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4a45fcf_70a0_4fc6_a592_f2c588936b3c.slice/crio-7125c6979cb3b5e596b9163c2db36d37d269b4b89d59202da289cd6405bf3b19 WatchSource:0}: Error finding container 7125c6979cb3b5e596b9163c2db36d37d269b4b89d59202da289cd6405bf3b19: Status 404 returned 
error can't find the container with id 7125c6979cb3b5e596b9163c2db36d37d269b4b89d59202da289cd6405bf3b19 Mar 08 00:50:03 crc kubenswrapper[4762]: I0308 00:50:03.928436 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5f957556fb-8j7fl"] Mar 08 00:50:04 crc kubenswrapper[4762]: I0308 00:50:04.549922 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 00:50:04 crc kubenswrapper[4762]: I0308 00:50:04.622078 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-8wmgv"] Mar 08 00:50:04 crc kubenswrapper[4762]: I0308 00:50:04.622342 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" podUID="605d5ec2-5c55-4d38-8403-d2f2f19ab8e1" containerName="dnsmasq-dns" containerID="cri-o://7e6b5f855284c8faaf5bb489e300ebfa2761eec1d5246980af64a6b87b735c97" gracePeriod=10 Mar 08 00:50:04 crc kubenswrapper[4762]: I0308 00:50:04.913317 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7588678759-6jpjt" event={"ID":"9976fcf2-7f49-45af-afe2-d5c3e07f2cac","Type":"ContainerStarted","Data":"e7d891977525b35f526f644833c90521e07788244bb3aaad70f7b2a9e2a53112"} Mar 08 00:50:04 crc kubenswrapper[4762]: I0308 00:50:04.913743 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7588678759-6jpjt" Mar 08 00:50:04 crc kubenswrapper[4762]: I0308 00:50:04.917789 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f957556fb-8j7fl" event={"ID":"d4a45fcf-70a0-4fc6-a592-f2c588936b3c","Type":"ContainerStarted","Data":"7125c6979cb3b5e596b9163c2db36d37d269b4b89d59202da289cd6405bf3b19"} Mar 08 00:50:04 crc kubenswrapper[4762]: I0308 00:50:04.919671 4762 generic.go:334] "Generic (PLEG): container finished" podID="605d5ec2-5c55-4d38-8403-d2f2f19ab8e1" 
containerID="7e6b5f855284c8faaf5bb489e300ebfa2761eec1d5246980af64a6b87b735c97" exitCode=0 Mar 08 00:50:04 crc kubenswrapper[4762]: I0308 00:50:04.919986 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" event={"ID":"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1","Type":"ContainerDied","Data":"7e6b5f855284c8faaf5bb489e300ebfa2761eec1d5246980af64a6b87b735c97"} Mar 08 00:50:04 crc kubenswrapper[4762]: I0308 00:50:04.934709 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7588678759-6jpjt" podStartSLOduration=2.934691584 podStartE2EDuration="2.934691584s" podCreationTimestamp="2026-03-08 00:50:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:50:04.931810546 +0000 UTC m=+1626.405954890" watchObservedRunningTime="2026-03-08 00:50:04.934691584 +0000 UTC m=+1626.408835928" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.642703 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548850-fbqsx" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.737725 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.739064 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nh8b\" (UniqueName: \"kubernetes.io/projected/1ec41a8c-126a-4fb2-972f-bad18afb2398-kube-api-access-5nh8b\") pod \"1ec41a8c-126a-4fb2-972f-bad18afb2398\" (UID: \"1ec41a8c-126a-4fb2-972f-bad18afb2398\") " Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.755922 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec41a8c-126a-4fb2-972f-bad18afb2398-kube-api-access-5nh8b" (OuterVolumeSpecName: "kube-api-access-5nh8b") pod "1ec41a8c-126a-4fb2-972f-bad18afb2398" (UID: "1ec41a8c-126a-4fb2-972f-bad18afb2398"). InnerVolumeSpecName "kube-api-access-5nh8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.840404 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-config\") pod \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.840616 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-dns-svc\") pod \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.840649 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-openstack-edpm-ipam\") pod \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 
00:50:05.840699 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-dns-swift-storage-0\") pod \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.840714 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-ovsdbserver-sb\") pod \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.840793 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-ovsdbserver-nb\") pod \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.840833 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tl7cj\" (UniqueName: \"kubernetes.io/projected/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-kube-api-access-tl7cj\") pod \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\" (UID: \"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1\") " Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.841334 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nh8b\" (UniqueName: \"kubernetes.io/projected/1ec41a8c-126a-4fb2-972f-bad18afb2398-kube-api-access-5nh8b\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.846374 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-kube-api-access-tl7cj" (OuterVolumeSpecName: "kube-api-access-tl7cj") pod "605d5ec2-5c55-4d38-8403-d2f2f19ab8e1" (UID: 
"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1"). InnerVolumeSpecName "kube-api-access-tl7cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.906074 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "605d5ec2-5c55-4d38-8403-d2f2f19ab8e1" (UID: "605d5ec2-5c55-4d38-8403-d2f2f19ab8e1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.915305 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "605d5ec2-5c55-4d38-8403-d2f2f19ab8e1" (UID: "605d5ec2-5c55-4d38-8403-d2f2f19ab8e1"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.918972 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "605d5ec2-5c55-4d38-8403-d2f2f19ab8e1" (UID: "605d5ec2-5c55-4d38-8403-d2f2f19ab8e1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.921999 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "605d5ec2-5c55-4d38-8403-d2f2f19ab8e1" (UID: "605d5ec2-5c55-4d38-8403-d2f2f19ab8e1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.922058 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-config" (OuterVolumeSpecName: "config") pod "605d5ec2-5c55-4d38-8403-d2f2f19ab8e1" (UID: "605d5ec2-5c55-4d38-8403-d2f2f19ab8e1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.930364 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "605d5ec2-5c55-4d38-8403-d2f2f19ab8e1" (UID: "605d5ec2-5c55-4d38-8403-d2f2f19ab8e1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.931897 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5f957556fb-8j7fl" event={"ID":"d4a45fcf-70a0-4fc6-a592-f2c588936b3c","Type":"ContainerStarted","Data":"6f517bc454787e739812899c77f67965495299ec621de47109d237b176476501"} Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.931979 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.934516 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548850-fbqsx" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.934506 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548850-fbqsx" event={"ID":"1ec41a8c-126a-4fb2-972f-bad18afb2398","Type":"ContainerDied","Data":"91f612945f25f4addd6a4f454cdc9aa2a27dad40656143f41ec26b50f798b84b"} Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.934801 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91f612945f25f4addd6a4f454cdc9aa2a27dad40656143f41ec26b50f798b84b" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.937341 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" event={"ID":"605d5ec2-5c55-4d38-8403-d2f2f19ab8e1","Type":"ContainerDied","Data":"c9288d9c573b57c5c167479e1b613d233b48e79a265a7224a5defd05d760f22d"} Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.937390 4762 scope.go:117] "RemoveContainer" containerID="7e6b5f855284c8faaf5bb489e300ebfa2761eec1d5246980af64a6b87b735c97" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.937359 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-8wmgv" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.942469 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-74bd49b68d-nxdxh" event={"ID":"08f3408b-7c06-4574-b984-a4bd2ee0d99f","Type":"ContainerStarted","Data":"66907a2e35f6e85464aa7f3c0e41005b307073ac2ffd14f93be42287b950b1c5"} Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.943382 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.943724 4762 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.943871 4762 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.943893 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.943906 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.943915 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tl7cj\" (UniqueName: \"kubernetes.io/projected/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-kube-api-access-tl7cj\") on node \"crc\" 
DevicePath \"\"" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.943924 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1-config\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.973880 4762 scope.go:117] "RemoveContainer" containerID="103ce4e829d702a15656cd916a4eda79a07a82ed6d00daffe4e8e19586018014" Mar 08 00:50:05 crc kubenswrapper[4762]: I0308 00:50:05.977630 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5f957556fb-8j7fl" podStartSLOduration=2.442627147 podStartE2EDuration="3.977604542s" podCreationTimestamp="2026-03-08 00:50:02 +0000 UTC" firstStartedPulling="2026-03-08 00:50:03.927836776 +0000 UTC m=+1625.401981120" lastFinishedPulling="2026-03-08 00:50:05.462814171 +0000 UTC m=+1626.936958515" observedRunningTime="2026-03-08 00:50:05.954353882 +0000 UTC m=+1627.428498236" watchObservedRunningTime="2026-03-08 00:50:05.977604542 +0000 UTC m=+1627.451748886" Mar 08 00:50:06 crc kubenswrapper[4762]: I0308 00:50:06.026283 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-74bd49b68d-nxdxh" podStartSLOduration=2.299892336 podStartE2EDuration="4.026263599s" podCreationTimestamp="2026-03-08 00:50:02 +0000 UTC" firstStartedPulling="2026-03-08 00:50:03.722556184 +0000 UTC m=+1625.196700538" lastFinishedPulling="2026-03-08 00:50:05.448927457 +0000 UTC m=+1626.923071801" observedRunningTime="2026-03-08 00:50:05.98764887 +0000 UTC m=+1627.461793214" watchObservedRunningTime="2026-03-08 00:50:06.026263599 +0000 UTC m=+1627.500407943" Mar 08 00:50:06 crc kubenswrapper[4762]: I0308 00:50:06.037839 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-8wmgv"] Mar 08 00:50:06 crc kubenswrapper[4762]: I0308 00:50:06.050963 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-5b75489c6f-8wmgv"] Mar 08 00:50:06 crc kubenswrapper[4762]: I0308 00:50:06.726501 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548844-22sqx"] Mar 08 00:50:06 crc kubenswrapper[4762]: I0308 00:50:06.741795 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548844-22sqx"] Mar 08 00:50:06 crc kubenswrapper[4762]: I0308 00:50:06.958791 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:07 crc kubenswrapper[4762]: I0308 00:50:07.294582 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="448af2de-e96f-4db2-a811-66c9805f7f34" path="/var/lib/kubelet/pods/448af2de-e96f-4db2-a811-66c9805f7f34/volumes" Mar 08 00:50:07 crc kubenswrapper[4762]: I0308 00:50:07.300470 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="605d5ec2-5c55-4d38-8403-d2f2f19ab8e1" path="/var/lib/kubelet/pods/605d5ec2-5c55-4d38-8403-d2f2f19ab8e1/volumes" Mar 08 00:50:12 crc kubenswrapper[4762]: I0308 00:50:12.762621 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk"] Mar 08 00:50:12 crc kubenswrapper[4762]: E0308 00:50:12.763444 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec41a8c-126a-4fb2-972f-bad18afb2398" containerName="oc" Mar 08 00:50:12 crc kubenswrapper[4762]: I0308 00:50:12.763460 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec41a8c-126a-4fb2-972f-bad18afb2398" containerName="oc" Mar 08 00:50:12 crc kubenswrapper[4762]: E0308 00:50:12.763495 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605d5ec2-5c55-4d38-8403-d2f2f19ab8e1" containerName="init" Mar 08 00:50:12 crc kubenswrapper[4762]: I0308 00:50:12.763503 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="605d5ec2-5c55-4d38-8403-d2f2f19ab8e1" 
containerName="init" Mar 08 00:50:12 crc kubenswrapper[4762]: E0308 00:50:12.763533 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605d5ec2-5c55-4d38-8403-d2f2f19ab8e1" containerName="dnsmasq-dns" Mar 08 00:50:12 crc kubenswrapper[4762]: I0308 00:50:12.763544 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="605d5ec2-5c55-4d38-8403-d2f2f19ab8e1" containerName="dnsmasq-dns" Mar 08 00:50:12 crc kubenswrapper[4762]: I0308 00:50:12.763776 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec41a8c-126a-4fb2-972f-bad18afb2398" containerName="oc" Mar 08 00:50:12 crc kubenswrapper[4762]: I0308 00:50:12.763794 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="605d5ec2-5c55-4d38-8403-d2f2f19ab8e1" containerName="dnsmasq-dns" Mar 08 00:50:12 crc kubenswrapper[4762]: I0308 00:50:12.764703 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" Mar 08 00:50:12 crc kubenswrapper[4762]: I0308 00:50:12.767272 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 00:50:12 crc kubenswrapper[4762]: I0308 00:50:12.767307 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 00:50:12 crc kubenswrapper[4762]: I0308 00:50:12.767532 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 00:50:12 crc kubenswrapper[4762]: I0308 00:50:12.767685 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 00:50:12 crc kubenswrapper[4762]: I0308 00:50:12.790388 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk"] Mar 08 00:50:12 crc kubenswrapper[4762]: I0308 00:50:12.911988 4762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk\" (UID: \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" Mar 08 00:50:12 crc kubenswrapper[4762]: I0308 00:50:12.912047 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hskdc\" (UniqueName: \"kubernetes.io/projected/411b0d7b-2d67-4965-adc7-386c2a0a4e69-kube-api-access-hskdc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk\" (UID: \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" Mar 08 00:50:12 crc kubenswrapper[4762]: I0308 00:50:12.912107 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk\" (UID: \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" Mar 08 00:50:12 crc kubenswrapper[4762]: I0308 00:50:12.912176 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk\" (UID: \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" Mar 08 00:50:13 crc kubenswrapper[4762]: I0308 00:50:13.015001 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk\" (UID: \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" Mar 08 00:50:13 crc kubenswrapper[4762]: I0308 00:50:13.015149 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk\" (UID: \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" Mar 08 00:50:13 crc kubenswrapper[4762]: I0308 00:50:13.015440 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk\" (UID: \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" Mar 08 00:50:13 crc kubenswrapper[4762]: I0308 00:50:13.015498 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hskdc\" (UniqueName: \"kubernetes.io/projected/411b0d7b-2d67-4965-adc7-386c2a0a4e69-kube-api-access-hskdc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk\" (UID: \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" Mar 08 00:50:13 crc kubenswrapper[4762]: I0308 00:50:13.021207 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk\" (UID: 
\"411b0d7b-2d67-4965-adc7-386c2a0a4e69\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" Mar 08 00:50:13 crc kubenswrapper[4762]: I0308 00:50:13.022046 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk\" (UID: \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" Mar 08 00:50:13 crc kubenswrapper[4762]: I0308 00:50:13.022661 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk\" (UID: \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" Mar 08 00:50:13 crc kubenswrapper[4762]: I0308 00:50:13.036626 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hskdc\" (UniqueName: \"kubernetes.io/projected/411b0d7b-2d67-4965-adc7-386c2a0a4e69-kube-api-access-hskdc\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk\" (UID: \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" Mar 08 00:50:13 crc kubenswrapper[4762]: I0308 00:50:13.091179 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" Mar 08 00:50:13 crc kubenswrapper[4762]: I0308 00:50:13.242814 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7588678759-6jpjt" Mar 08 00:50:13 crc kubenswrapper[4762]: I0308 00:50:13.342397 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-68f674dbc4-pt9pq"] Mar 08 00:50:13 crc kubenswrapper[4762]: I0308 00:50:13.342671 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-68f674dbc4-pt9pq" podUID="e2965295-b595-4655-80c6-daa506d337c7" containerName="heat-engine" containerID="cri-o://88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b" gracePeriod=60 Mar 08 00:50:13 crc kubenswrapper[4762]: I0308 00:50:13.707128 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk"] Mar 08 00:50:14 crc kubenswrapper[4762]: I0308 00:50:14.100912 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" event={"ID":"411b0d7b-2d67-4965-adc7-386c2a0a4e69","Type":"ContainerStarted","Data":"964365bca44b4ed45a43a008cd2f96844a65e75330708d2d7d7b4baf80237110"} Mar 08 00:50:14 crc kubenswrapper[4762]: I0308 00:50:14.623210 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-74bd49b68d-nxdxh" Mar 08 00:50:14 crc kubenswrapper[4762]: I0308 00:50:14.690910 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-557f98fcf9-zd48x"] Mar 08 00:50:14 crc kubenswrapper[4762]: I0308 00:50:14.691155 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-557f98fcf9-zd48x" podUID="07b2a65f-ba23-4ca0-991d-9e4e3c3458ae" containerName="heat-api" 
containerID="cri-o://618f3d2692343fc3d57031da6624abc6e396b4c62df06f4a04080be79617bd32" gracePeriod=60 Mar 08 00:50:15 crc kubenswrapper[4762]: E0308 00:50:15.131639 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 08 00:50:15 crc kubenswrapper[4762]: E0308 00:50:15.133423 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 08 00:50:15 crc kubenswrapper[4762]: E0308 00:50:15.135070 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 08 00:50:15 crc kubenswrapper[4762]: E0308 00:50:15.135187 4762 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-68f674dbc4-pt9pq" podUID="e2965295-b595-4655-80c6-daa506d337c7" containerName="heat-engine" Mar 08 00:50:15 crc kubenswrapper[4762]: I0308 00:50:15.172211 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5f957556fb-8j7fl" Mar 08 00:50:15 crc kubenswrapper[4762]: I0308 00:50:15.275957 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-66558fbb95-w94pj"] Mar 
08 00:50:15 crc kubenswrapper[4762]: I0308 00:50:15.276177 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-66558fbb95-w94pj" podUID="0c28709c-824a-4d55-9741-7d37406ae689" containerName="heat-cfnapi" containerID="cri-o://8def5b71e03a34a0f8e1f0cc35f9f5fa3a17bfc26bffde27d3fa5c9f1cabcce7" gracePeriod=60 Mar 08 00:50:17 crc kubenswrapper[4762]: I0308 00:50:17.852905 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-557f98fcf9-zd48x" podUID="07b2a65f-ba23-4ca0-991d-9e4e3c3458ae" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.222:8004/healthcheck\": read tcp 10.217.0.2:39790->10.217.0.222:8004: read: connection reset by peer" Mar 08 00:50:18 crc kubenswrapper[4762]: I0308 00:50:18.154526 4762 generic.go:334] "Generic (PLEG): container finished" podID="07b2a65f-ba23-4ca0-991d-9e4e3c3458ae" containerID="618f3d2692343fc3d57031da6624abc6e396b4c62df06f4a04080be79617bd32" exitCode=0 Mar 08 00:50:18 crc kubenswrapper[4762]: I0308 00:50:18.154596 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-557f98fcf9-zd48x" event={"ID":"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae","Type":"ContainerDied","Data":"618f3d2692343fc3d57031da6624abc6e396b4c62df06f4a04080be79617bd32"} Mar 08 00:50:18 crc kubenswrapper[4762]: I0308 00:50:18.225912 4762 scope.go:117] "RemoveContainer" containerID="d3aa62d5919c60e110b67b582ffbc719a3bee2abee9348dfc2f4b323d33bec97" Mar 08 00:50:18 crc kubenswrapper[4762]: I0308 00:50:18.439470 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-66558fbb95-w94pj" podUID="0c28709c-824a-4d55-9741-7d37406ae689" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.223:8000/healthcheck\": read tcp 10.217.0.2:60944->10.217.0.223:8000: read: connection reset by peer" Mar 08 00:50:19 crc kubenswrapper[4762]: I0308 00:50:19.177683 4762 generic.go:334] "Generic (PLEG): container 
finished" podID="cea7862c-6515-43de-826c-87e285980ca0" containerID="e500514b5ba4d6d2155370fc96579af7a0e7b30ae364ad245c503e15c1ba5d16" exitCode=0 Mar 08 00:50:19 crc kubenswrapper[4762]: I0308 00:50:19.177831 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cea7862c-6515-43de-826c-87e285980ca0","Type":"ContainerDied","Data":"e500514b5ba4d6d2155370fc96579af7a0e7b30ae364ad245c503e15c1ba5d16"} Mar 08 00:50:19 crc kubenswrapper[4762]: I0308 00:50:19.181273 4762 generic.go:334] "Generic (PLEG): container finished" podID="0c28709c-824a-4d55-9741-7d37406ae689" containerID="8def5b71e03a34a0f8e1f0cc35f9f5fa3a17bfc26bffde27d3fa5c9f1cabcce7" exitCode=0 Mar 08 00:50:19 crc kubenswrapper[4762]: I0308 00:50:19.181318 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66558fbb95-w94pj" event={"ID":"0c28709c-824a-4d55-9741-7d37406ae689","Type":"ContainerDied","Data":"8def5b71e03a34a0f8e1f0cc35f9f5fa3a17bfc26bffde27d3fa5c9f1cabcce7"} Mar 08 00:50:19 crc kubenswrapper[4762]: I0308 00:50:19.827172 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-snqd5"] Mar 08 00:50:19 crc kubenswrapper[4762]: I0308 00:50:19.837478 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-snqd5"] Mar 08 00:50:19 crc kubenswrapper[4762]: I0308 00:50:19.953310 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-tl48n"] Mar 08 00:50:19 crc kubenswrapper[4762]: I0308 00:50:19.957733 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-tl48n" Mar 08 00:50:19 crc kubenswrapper[4762]: I0308 00:50:19.960894 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 08 00:50:19 crc kubenswrapper[4762]: I0308 00:50:19.969391 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-tl48n"] Mar 08 00:50:20 crc kubenswrapper[4762]: I0308 00:50:20.001699 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-config-data\") pod \"aodh-db-sync-tl48n\" (UID: \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\") " pod="openstack/aodh-db-sync-tl48n" Mar 08 00:50:20 crc kubenswrapper[4762]: I0308 00:50:20.002278 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-scripts\") pod \"aodh-db-sync-tl48n\" (UID: \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\") " pod="openstack/aodh-db-sync-tl48n" Mar 08 00:50:20 crc kubenswrapper[4762]: I0308 00:50:20.002347 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhw8g\" (UniqueName: \"kubernetes.io/projected/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-kube-api-access-nhw8g\") pod \"aodh-db-sync-tl48n\" (UID: \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\") " pod="openstack/aodh-db-sync-tl48n" Mar 08 00:50:20 crc kubenswrapper[4762]: I0308 00:50:20.002431 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-combined-ca-bundle\") pod \"aodh-db-sync-tl48n\" (UID: \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\") " pod="openstack/aodh-db-sync-tl48n" Mar 08 00:50:20 crc kubenswrapper[4762]: I0308 00:50:20.103977 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-scripts\") pod \"aodh-db-sync-tl48n\" (UID: \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\") " pod="openstack/aodh-db-sync-tl48n" Mar 08 00:50:20 crc kubenswrapper[4762]: I0308 00:50:20.104018 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhw8g\" (UniqueName: \"kubernetes.io/projected/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-kube-api-access-nhw8g\") pod \"aodh-db-sync-tl48n\" (UID: \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\") " pod="openstack/aodh-db-sync-tl48n" Mar 08 00:50:20 crc kubenswrapper[4762]: I0308 00:50:20.104047 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-combined-ca-bundle\") pod \"aodh-db-sync-tl48n\" (UID: \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\") " pod="openstack/aodh-db-sync-tl48n" Mar 08 00:50:20 crc kubenswrapper[4762]: I0308 00:50:20.104122 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-config-data\") pod \"aodh-db-sync-tl48n\" (UID: \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\") " pod="openstack/aodh-db-sync-tl48n" Mar 08 00:50:20 crc kubenswrapper[4762]: I0308 00:50:20.110573 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-combined-ca-bundle\") pod \"aodh-db-sync-tl48n\" (UID: \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\") " pod="openstack/aodh-db-sync-tl48n" Mar 08 00:50:20 crc kubenswrapper[4762]: I0308 00:50:20.113744 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-scripts\") 
pod \"aodh-db-sync-tl48n\" (UID: \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\") " pod="openstack/aodh-db-sync-tl48n" Mar 08 00:50:20 crc kubenswrapper[4762]: I0308 00:50:20.118126 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-config-data\") pod \"aodh-db-sync-tl48n\" (UID: \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\") " pod="openstack/aodh-db-sync-tl48n" Mar 08 00:50:20 crc kubenswrapper[4762]: I0308 00:50:20.119625 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhw8g\" (UniqueName: \"kubernetes.io/projected/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-kube-api-access-nhw8g\") pod \"aodh-db-sync-tl48n\" (UID: \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\") " pod="openstack/aodh-db-sync-tl48n" Mar 08 00:50:20 crc kubenswrapper[4762]: I0308 00:50:20.195360 4762 generic.go:334] "Generic (PLEG): container finished" podID="83567ea1-f607-4be2-b0af-6d09bcf74e06" containerID="0144162d6e612bcc4933713908f44d044e74c82558fcc40a947791846f1dc939" exitCode=0 Mar 08 00:50:20 crc kubenswrapper[4762]: I0308 00:50:20.195400 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"83567ea1-f607-4be2-b0af-6d09bcf74e06","Type":"ContainerDied","Data":"0144162d6e612bcc4933713908f44d044e74c82558fcc40a947791846f1dc939"} Mar 08 00:50:20 crc kubenswrapper[4762]: I0308 00:50:20.274176 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-tl48n" Mar 08 00:50:21 crc kubenswrapper[4762]: I0308 00:50:21.278109 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd20c5d8-125b-4519-be85-bd0b7d23c141" path="/var/lib/kubelet/pods/cd20c5d8-125b-4519-be85-bd0b7d23c141/volumes" Mar 08 00:50:22 crc kubenswrapper[4762]: I0308 00:50:22.759071 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-557f98fcf9-zd48x" podUID="07b2a65f-ba23-4ca0-991d-9e4e3c3458ae" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.222:8004/healthcheck\": dial tcp 10.217.0.222:8004: connect: connection refused" Mar 08 00:50:22 crc kubenswrapper[4762]: I0308 00:50:22.777277 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-66558fbb95-w94pj" podUID="0c28709c-824a-4d55-9741-7d37406ae689" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.223:8000/healthcheck\": dial tcp 10.217.0.223:8000: connect: connection refused" Mar 08 00:50:23 crc kubenswrapper[4762]: I0308 00:50:23.514850 4762 scope.go:117] "RemoveContainer" containerID="9158efa9b78f64d18ff46cf34e7e957a248e1599d62fb76715962dab0440e051" Mar 08 00:50:23 crc kubenswrapper[4762]: I0308 00:50:23.750951 4762 scope.go:117] "RemoveContainer" containerID="f83b11436a82b08be21c4d8c66576da56f79d198bc5395ff870d1315def214c9" Mar 08 00:50:23 crc kubenswrapper[4762]: I0308 00:50:23.987398 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.030259 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.087521 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-config-data\") pod \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.087615 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-combined-ca-bundle\") pod \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.087684 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-config-data-custom\") pod \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.087869 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-internal-tls-certs\") pod \"0c28709c-824a-4d55-9741-7d37406ae689\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.087893 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpwlt\" (UniqueName: \"kubernetes.io/projected/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-kube-api-access-dpwlt\") pod \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.087931 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-public-tls-certs\") pod \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.087958 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-config-data-custom\") pod \"0c28709c-824a-4d55-9741-7d37406ae689\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.087975 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kklmh\" (UniqueName: \"kubernetes.io/projected/0c28709c-824a-4d55-9741-7d37406ae689-kube-api-access-kklmh\") pod \"0c28709c-824a-4d55-9741-7d37406ae689\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.087991 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-internal-tls-certs\") pod \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\" (UID: \"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae\") " Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.088061 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-config-data\") pod \"0c28709c-824a-4d55-9741-7d37406ae689\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.088086 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-public-tls-certs\") pod \"0c28709c-824a-4d55-9741-7d37406ae689\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") 
" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.088107 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-combined-ca-bundle\") pod \"0c28709c-824a-4d55-9741-7d37406ae689\" (UID: \"0c28709c-824a-4d55-9741-7d37406ae689\") " Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.110127 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0c28709c-824a-4d55-9741-7d37406ae689" (UID: "0c28709c-824a-4d55-9741-7d37406ae689"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.121286 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "07b2a65f-ba23-4ca0-991d-9e4e3c3458ae" (UID: "07b2a65f-ba23-4ca0-991d-9e4e3c3458ae"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.135393 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-kube-api-access-dpwlt" (OuterVolumeSpecName: "kube-api-access-dpwlt") pod "07b2a65f-ba23-4ca0-991d-9e4e3c3458ae" (UID: "07b2a65f-ba23-4ca0-991d-9e4e3c3458ae"). InnerVolumeSpecName "kube-api-access-dpwlt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.155936 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c28709c-824a-4d55-9741-7d37406ae689-kube-api-access-kklmh" (OuterVolumeSpecName: "kube-api-access-kklmh") pod "0c28709c-824a-4d55-9741-7d37406ae689" (UID: "0c28709c-824a-4d55-9741-7d37406ae689"). InnerVolumeSpecName "kube-api-access-kklmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:50:24 crc kubenswrapper[4762]: W0308 00:50:24.171945 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda277fe04_ceb8_40ca_aa94_8c1f440cf7c9.slice/crio-9fdfc190d4f1bb8feb9ece630b4118956b3d469b2bcab904eeed469fa10d0c0d WatchSource:0}: Error finding container 9fdfc190d4f1bb8feb9ece630b4118956b3d469b2bcab904eeed469fa10d0c0d: Status 404 returned error can't find the container with id 9fdfc190d4f1bb8feb9ece630b4118956b3d469b2bcab904eeed469fa10d0c0d Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.193317 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.194621 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpwlt\" (UniqueName: \"kubernetes.io/projected/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-kube-api-access-dpwlt\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.194802 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.194862 4762 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-kklmh\" (UniqueName: \"kubernetes.io/projected/0c28709c-824a-4d55-9741-7d37406ae689-kube-api-access-kklmh\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.208080 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "07b2a65f-ba23-4ca0-991d-9e4e3c3458ae" (UID: "07b2a65f-ba23-4ca0-991d-9e4e3c3458ae"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.209061 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-tl48n"] Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.210917 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c28709c-824a-4d55-9741-7d37406ae689" (UID: "0c28709c-824a-4d55-9741-7d37406ae689"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.213891 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07b2a65f-ba23-4ca0-991d-9e4e3c3458ae" (UID: "07b2a65f-ba23-4ca0-991d-9e4e3c3458ae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.234233 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "07b2a65f-ba23-4ca0-991d-9e4e3c3458ae" (UID: "07b2a65f-ba23-4ca0-991d-9e4e3c3458ae"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.244092 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-config-data" (OuterVolumeSpecName: "config-data") pod "0c28709c-824a-4d55-9741-7d37406ae689" (UID: "0c28709c-824a-4d55-9741-7d37406ae689"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.249906 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0c28709c-824a-4d55-9741-7d37406ae689" (UID: "0c28709c-824a-4d55-9741-7d37406ae689"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.252926 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-config-data" (OuterVolumeSpecName: "config-data") pod "07b2a65f-ba23-4ca0-991d-9e4e3c3458ae" (UID: "07b2a65f-ba23-4ca0-991d-9e4e3c3458ae"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.253116 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-tl48n" event={"ID":"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9","Type":"ContainerStarted","Data":"9fdfc190d4f1bb8feb9ece630b4118956b3d469b2bcab904eeed469fa10d0c0d"} Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.263351 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"83567ea1-f607-4be2-b0af-6d09bcf74e06","Type":"ContainerStarted","Data":"16d4a51e1dbef111371b6eb65131421f5fad27cb6ea1d86117a0e34311f3cbdc"} Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.264825 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.278043 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"cea7862c-6515-43de-826c-87e285980ca0","Type":"ContainerStarted","Data":"d51ea7a6c52a6e9f5594e1ea7f4f4f1dd5f8f01b920087cd56e12fba258945b5"} Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.279325 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.292959 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" event={"ID":"411b0d7b-2d67-4965-adc7-386c2a0a4e69","Type":"ContainerStarted","Data":"fef10f330f5c8970c7c76876467aa0318eb2405db2cbcea6f9cf2e9df57839cb"} Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.296385 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0c28709c-824a-4d55-9741-7d37406ae689" (UID: 
"0c28709c-824a-4d55-9741-7d37406ae689"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.296493 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-66558fbb95-w94pj" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.296505 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-66558fbb95-w94pj" event={"ID":"0c28709c-824a-4d55-9741-7d37406ae689","Type":"ContainerDied","Data":"b9f80df0b0be50276cbc0674ab7163bdb45183dac6ec11e1c6476cf5bca65253"} Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.296566 4762 scope.go:117] "RemoveContainer" containerID="8def5b71e03a34a0f8e1f0cc35f9f5fa3a17bfc26bffde27d3fa5c9f1cabcce7" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.304492 4762 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.304520 4762 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.304530 4762 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.304542 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.304551 4762 reconciler_common.go:293] "Volume 
detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.304561 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c28709c-824a-4d55-9741-7d37406ae689-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.304569 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.304578 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.310470 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=48.31044968 podStartE2EDuration="48.31044968s" podCreationTimestamp="2026-03-08 00:49:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:50:24.302532999 +0000 UTC m=+1645.776677343" watchObservedRunningTime="2026-03-08 00:50:24.31044968 +0000 UTC m=+1645.784594024" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.312167 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-557f98fcf9-zd48x" event={"ID":"07b2a65f-ba23-4ca0-991d-9e4e3c3458ae","Type":"ContainerDied","Data":"e9362912d8a8f7c1a000784471b89f59f93b374c7d5d2fa58fd31941c417103c"} Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.312249 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-557f98fcf9-zd48x" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.341110 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.341095847 podStartE2EDuration="42.341095847s" podCreationTimestamp="2026-03-08 00:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 00:50:24.339746615 +0000 UTC m=+1645.813890959" watchObservedRunningTime="2026-03-08 00:50:24.341095847 +0000 UTC m=+1645.815240191" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.344066 4762 scope.go:117] "RemoveContainer" containerID="618f3d2692343fc3d57031da6624abc6e396b4c62df06f4a04080be79617bd32" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.363960 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" podStartSLOduration=2.463895204 podStartE2EDuration="12.363742319s" podCreationTimestamp="2026-03-08 00:50:12 +0000 UTC" firstStartedPulling="2026-03-08 00:50:13.709076258 +0000 UTC m=+1635.183220602" lastFinishedPulling="2026-03-08 00:50:23.608923373 +0000 UTC m=+1645.083067717" observedRunningTime="2026-03-08 00:50:24.355963751 +0000 UTC m=+1645.830108095" watchObservedRunningTime="2026-03-08 00:50:24.363742319 +0000 UTC m=+1645.837886663" Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.394906 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-66558fbb95-w94pj"] Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.425049 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-66558fbb95-w94pj"] Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.435824 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-557f98fcf9-zd48x"] Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 
00:50:24.447173 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-557f98fcf9-zd48x"] Mar 08 00:50:24 crc kubenswrapper[4762]: I0308 00:50:24.616669 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 08 00:50:25 crc kubenswrapper[4762]: E0308 00:50:25.133062 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 08 00:50:25 crc kubenswrapper[4762]: E0308 00:50:25.134730 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 08 00:50:25 crc kubenswrapper[4762]: E0308 00:50:25.136099 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 08 00:50:25 crc kubenswrapper[4762]: E0308 00:50:25.136136 4762 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-68f674dbc4-pt9pq" podUID="e2965295-b595-4655-80c6-daa506d337c7" containerName="heat-engine" Mar 08 00:50:25 crc kubenswrapper[4762]: I0308 00:50:25.279962 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="07b2a65f-ba23-4ca0-991d-9e4e3c3458ae" path="/var/lib/kubelet/pods/07b2a65f-ba23-4ca0-991d-9e4e3c3458ae/volumes" Mar 08 00:50:25 crc kubenswrapper[4762]: I0308 00:50:25.280497 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c28709c-824a-4d55-9741-7d37406ae689" path="/var/lib/kubelet/pods/0c28709c-824a-4d55-9741-7d37406ae689/volumes" Mar 08 00:50:31 crc kubenswrapper[4762]: I0308 00:50:31.392894 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-tl48n" event={"ID":"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9","Type":"ContainerStarted","Data":"65dc75ac130668727c2741affea466494afd2139ee82688665d1d07f4658d61f"} Mar 08 00:50:31 crc kubenswrapper[4762]: I0308 00:50:31.443934 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-tl48n" podStartSLOduration=5.880787819 podStartE2EDuration="12.443909192s" podCreationTimestamp="2026-03-08 00:50:19 +0000 UTC" firstStartedPulling="2026-03-08 00:50:24.199101678 +0000 UTC m=+1645.673246022" lastFinishedPulling="2026-03-08 00:50:30.762223051 +0000 UTC m=+1652.236367395" observedRunningTime="2026-03-08 00:50:31.409533561 +0000 UTC m=+1652.883677905" watchObservedRunningTime="2026-03-08 00:50:31.443909192 +0000 UTC m=+1652.918053536" Mar 08 00:50:33 crc kubenswrapper[4762]: I0308 00:50:33.271586 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="cea7862c-6515-43de-826c-87e285980ca0" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.11:5671: connect: connection refused" Mar 08 00:50:33 crc kubenswrapper[4762]: I0308 00:50:33.418833 4762 generic.go:334] "Generic (PLEG): container finished" podID="a277fe04-ceb8-40ca-aa94-8c1f440cf7c9" containerID="65dc75ac130668727c2741affea466494afd2139ee82688665d1d07f4658d61f" exitCode=0 Mar 08 00:50:33 crc kubenswrapper[4762]: I0308 00:50:33.418882 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-db-sync-tl48n" event={"ID":"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9","Type":"ContainerDied","Data":"65dc75ac130668727c2741affea466494afd2139ee82688665d1d07f4658d61f"} Mar 08 00:50:33 crc kubenswrapper[4762]: I0308 00:50:33.988090 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.081643 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-combined-ca-bundle\") pod \"e2965295-b595-4655-80c6-daa506d337c7\" (UID: \"e2965295-b595-4655-80c6-daa506d337c7\") " Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.081800 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-config-data-custom\") pod \"e2965295-b595-4655-80c6-daa506d337c7\" (UID: \"e2965295-b595-4655-80c6-daa506d337c7\") " Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.081867 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njmm6\" (UniqueName: \"kubernetes.io/projected/e2965295-b595-4655-80c6-daa506d337c7-kube-api-access-njmm6\") pod \"e2965295-b595-4655-80c6-daa506d337c7\" (UID: \"e2965295-b595-4655-80c6-daa506d337c7\") " Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.082019 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-config-data\") pod \"e2965295-b595-4655-80c6-daa506d337c7\" (UID: \"e2965295-b595-4655-80c6-daa506d337c7\") " Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.086695 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e2965295-b595-4655-80c6-daa506d337c7-kube-api-access-njmm6" (OuterVolumeSpecName: "kube-api-access-njmm6") pod "e2965295-b595-4655-80c6-daa506d337c7" (UID: "e2965295-b595-4655-80c6-daa506d337c7"). InnerVolumeSpecName "kube-api-access-njmm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.086694 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e2965295-b595-4655-80c6-daa506d337c7" (UID: "e2965295-b595-4655-80c6-daa506d337c7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.111701 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2965295-b595-4655-80c6-daa506d337c7" (UID: "e2965295-b595-4655-80c6-daa506d337c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.154332 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-config-data" (OuterVolumeSpecName: "config-data") pod "e2965295-b595-4655-80c6-daa506d337c7" (UID: "e2965295-b595-4655-80c6-daa506d337c7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.184098 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.184144 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.184167 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2965295-b595-4655-80c6-daa506d337c7-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.184186 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njmm6\" (UniqueName: \"kubernetes.io/projected/e2965295-b595-4655-80c6-daa506d337c7-kube-api-access-njmm6\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.436160 4762 generic.go:334] "Generic (PLEG): container finished" podID="e2965295-b595-4655-80c6-daa506d337c7" containerID="88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b" exitCode=0 Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.436233 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-68f674dbc4-pt9pq" Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.436225 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68f674dbc4-pt9pq" event={"ID":"e2965295-b595-4655-80c6-daa506d337c7","Type":"ContainerDied","Data":"88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b"} Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.436409 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68f674dbc4-pt9pq" event={"ID":"e2965295-b595-4655-80c6-daa506d337c7","Type":"ContainerDied","Data":"991c2f25f8ab7cda162a82d47d414919eaf365482a10972e582943627ee61a2c"} Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.436449 4762 scope.go:117] "RemoveContainer" containerID="88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b" Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.487671 4762 scope.go:117] "RemoveContainer" containerID="88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b" Mar 08 00:50:34 crc kubenswrapper[4762]: E0308 00:50:34.489223 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b\": container with ID starting with 88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b not found: ID does not exist" containerID="88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b" Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.489297 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b"} err="failed to get container status \"88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b\": rpc error: code = NotFound desc = could not find container \"88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b\": container with 
ID starting with 88db5d25d51ed0eb3aca78161fcf75241c0c1a68e46b1328b26d102ea097803b not found: ID does not exist" Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.501297 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-68f674dbc4-pt9pq"] Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.510690 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-68f674dbc4-pt9pq"] Mar 08 00:50:34 crc kubenswrapper[4762]: I0308 00:50:34.941295 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-tl48n" Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.008978 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhw8g\" (UniqueName: \"kubernetes.io/projected/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-kube-api-access-nhw8g\") pod \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\" (UID: \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\") " Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.009087 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-scripts\") pod \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\" (UID: \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\") " Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.009151 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-combined-ca-bundle\") pod \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\" (UID: \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\") " Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.009240 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-config-data\") pod \"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\" (UID: 
\"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9\") " Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.017012 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-kube-api-access-nhw8g" (OuterVolumeSpecName: "kube-api-access-nhw8g") pod "a277fe04-ceb8-40ca-aa94-8c1f440cf7c9" (UID: "a277fe04-ceb8-40ca-aa94-8c1f440cf7c9"). InnerVolumeSpecName "kube-api-access-nhw8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.017108 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-scripts" (OuterVolumeSpecName: "scripts") pod "a277fe04-ceb8-40ca-aa94-8c1f440cf7c9" (UID: "a277fe04-ceb8-40ca-aa94-8c1f440cf7c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.055381 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-config-data" (OuterVolumeSpecName: "config-data") pod "a277fe04-ceb8-40ca-aa94-8c1f440cf7c9" (UID: "a277fe04-ceb8-40ca-aa94-8c1f440cf7c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.066302 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a277fe04-ceb8-40ca-aa94-8c1f440cf7c9" (UID: "a277fe04-ceb8-40ca-aa94-8c1f440cf7c9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.112308 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhw8g\" (UniqueName: \"kubernetes.io/projected/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-kube-api-access-nhw8g\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.112342 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.112358 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.112369 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.290624 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2965295-b595-4655-80c6-daa506d337c7" path="/var/lib/kubelet/pods/e2965295-b595-4655-80c6-daa506d337c7/volumes" Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.459111 4762 generic.go:334] "Generic (PLEG): container finished" podID="411b0d7b-2d67-4965-adc7-386c2a0a4e69" containerID="fef10f330f5c8970c7c76876467aa0318eb2405db2cbcea6f9cf2e9df57839cb" exitCode=0 Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.459892 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" event={"ID":"411b0d7b-2d67-4965-adc7-386c2a0a4e69","Type":"ContainerDied","Data":"fef10f330f5c8970c7c76876467aa0318eb2405db2cbcea6f9cf2e9df57839cb"} Mar 08 00:50:35 crc 
kubenswrapper[4762]: I0308 00:50:35.465439 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-tl48n" event={"ID":"a277fe04-ceb8-40ca-aa94-8c1f440cf7c9","Type":"ContainerDied","Data":"9fdfc190d4f1bb8feb9ece630b4118956b3d469b2bcab904eeed469fa10d0c0d"} Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.465505 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fdfc190d4f1bb8feb9ece630b4118956b3d469b2bcab904eeed469fa10d0c0d" Mar 08 00:50:35 crc kubenswrapper[4762]: I0308 00:50:35.465538 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-tl48n" Mar 08 00:50:36 crc kubenswrapper[4762]: I0308 00:50:36.793090 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 08 00:50:36 crc kubenswrapper[4762]: I0308 00:50:36.960730 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.057156 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.057514 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerName="aodh-api" containerID="cri-o://209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c" gracePeriod=30 Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.057550 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerName="aodh-listener" containerID="cri-o://183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18" gracePeriod=30 Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.057591 4762 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/aodh-0" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerName="aodh-notifier" containerID="cri-o://e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a" gracePeriod=30 Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.057656 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerName="aodh-evaluator" containerID="cri-o://fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593" gracePeriod=30 Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.073723 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-repo-setup-combined-ca-bundle\") pod \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\" (UID: \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\") " Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.073835 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-ssh-key-openstack-edpm-ipam\") pod \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\" (UID: \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\") " Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.073917 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-inventory\") pod \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\" (UID: \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\") " Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.074006 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hskdc\" (UniqueName: \"kubernetes.io/projected/411b0d7b-2d67-4965-adc7-386c2a0a4e69-kube-api-access-hskdc\") pod \"411b0d7b-2d67-4965-adc7-386c2a0a4e69\" (UID: 
\"411b0d7b-2d67-4965-adc7-386c2a0a4e69\") " Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.085037 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/411b0d7b-2d67-4965-adc7-386c2a0a4e69-kube-api-access-hskdc" (OuterVolumeSpecName: "kube-api-access-hskdc") pod "411b0d7b-2d67-4965-adc7-386c2a0a4e69" (UID: "411b0d7b-2d67-4965-adc7-386c2a0a4e69"). InnerVolumeSpecName "kube-api-access-hskdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.086178 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "411b0d7b-2d67-4965-adc7-386c2a0a4e69" (UID: "411b0d7b-2d67-4965-adc7-386c2a0a4e69"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.110960 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "411b0d7b-2d67-4965-adc7-386c2a0a4e69" (UID: "411b0d7b-2d67-4965-adc7-386c2a0a4e69"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.128994 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-inventory" (OuterVolumeSpecName: "inventory") pod "411b0d7b-2d67-4965-adc7-386c2a0a4e69" (UID: "411b0d7b-2d67-4965-adc7-386c2a0a4e69"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.176493 4762 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.176527 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.176541 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/411b0d7b-2d67-4965-adc7-386c2a0a4e69-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.176551 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hskdc\" (UniqueName: \"kubernetes.io/projected/411b0d7b-2d67-4965-adc7-386c2a0a4e69-kube-api-access-hskdc\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.500373 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" event={"ID":"411b0d7b-2d67-4965-adc7-386c2a0a4e69","Type":"ContainerDied","Data":"964365bca44b4ed45a43a008cd2f96844a65e75330708d2d7d7b4baf80237110"} Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.500411 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="964365bca44b4ed45a43a008cd2f96844a65e75330708d2d7d7b4baf80237110" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.500407 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.503886 4762 generic.go:334] "Generic (PLEG): container finished" podID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerID="fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593" exitCode=0 Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.503915 4762 generic.go:334] "Generic (PLEG): container finished" podID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerID="209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c" exitCode=0 Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.503935 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f69ade-3262-4bc2-9c8d-f17ebeabfce2","Type":"ContainerDied","Data":"fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593"} Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.503963 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f69ade-3262-4bc2-9c8d-f17ebeabfce2","Type":"ContainerDied","Data":"209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c"} Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.576375 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8"] Mar 08 00:50:37 crc kubenswrapper[4762]: E0308 00:50:37.576885 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b2a65f-ba23-4ca0-991d-9e4e3c3458ae" containerName="heat-api" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.576903 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b2a65f-ba23-4ca0-991d-9e4e3c3458ae" containerName="heat-api" Mar 08 00:50:37 crc kubenswrapper[4762]: E0308 00:50:37.576918 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a277fe04-ceb8-40ca-aa94-8c1f440cf7c9" containerName="aodh-db-sync" Mar 08 00:50:37 crc kubenswrapper[4762]: 
I0308 00:50:37.576928 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="a277fe04-ceb8-40ca-aa94-8c1f440cf7c9" containerName="aodh-db-sync" Mar 08 00:50:37 crc kubenswrapper[4762]: E0308 00:50:37.576950 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411b0d7b-2d67-4965-adc7-386c2a0a4e69" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.576960 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="411b0d7b-2d67-4965-adc7-386c2a0a4e69" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 08 00:50:37 crc kubenswrapper[4762]: E0308 00:50:37.576981 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c28709c-824a-4d55-9741-7d37406ae689" containerName="heat-cfnapi" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.576988 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c28709c-824a-4d55-9741-7d37406ae689" containerName="heat-cfnapi" Mar 08 00:50:37 crc kubenswrapper[4762]: E0308 00:50:37.577005 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2965295-b595-4655-80c6-daa506d337c7" containerName="heat-engine" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.577013 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2965295-b595-4655-80c6-daa506d337c7" containerName="heat-engine" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.577256 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="411b0d7b-2d67-4965-adc7-386c2a0a4e69" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.577287 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c28709c-824a-4d55-9741-7d37406ae689" containerName="heat-cfnapi" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.577305 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="a277fe04-ceb8-40ca-aa94-8c1f440cf7c9" 
containerName="aodh-db-sync" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.577321 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b2a65f-ba23-4ca0-991d-9e4e3c3458ae" containerName="heat-api" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.577334 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2965295-b595-4655-80c6-daa506d337c7" containerName="heat-engine" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.578279 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.583469 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.583513 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.583982 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.584010 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.590277 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8"] Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.686632 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8\" (UID: \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" Mar 08 00:50:37 crc 
kubenswrapper[4762]: I0308 00:50:37.686782 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8\" (UID: \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.686860 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h54j\" (UniqueName: \"kubernetes.io/projected/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-kube-api-access-5h54j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8\" (UID: \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.687004 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8\" (UID: \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.789589 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8\" (UID: \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.789730 4762 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-5h54j\" (UniqueName: \"kubernetes.io/projected/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-kube-api-access-5h54j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8\" (UID: \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.789803 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8\" (UID: \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.789889 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8\" (UID: \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.793972 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8\" (UID: \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.794059 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8\" (UID: \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.794558 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8\" (UID: \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.807331 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h54j\" (UniqueName: \"kubernetes.io/projected/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-kube-api-access-5h54j\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8\" (UID: \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" Mar 08 00:50:37 crc kubenswrapper[4762]: I0308 00:50:37.926669 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" Mar 08 00:50:38 crc kubenswrapper[4762]: I0308 00:50:38.508345 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8"] Mar 08 00:50:38 crc kubenswrapper[4762]: W0308 00:50:38.513249 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd66bfa70_7ce9_4bd3_9ca5_43f7cc9ccd95.slice/crio-aa1fa59b02e32f26258a1565ebf067249e0d3ec4755627842b73b6bb9a261f79 WatchSource:0}: Error finding container aa1fa59b02e32f26258a1565ebf067249e0d3ec4755627842b73b6bb9a261f79: Status 404 returned error can't find the container with id aa1fa59b02e32f26258a1565ebf067249e0d3ec4755627842b73b6bb9a261f79 Mar 08 00:50:39 crc kubenswrapper[4762]: I0308 00:50:39.545130 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" event={"ID":"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95","Type":"ContainerStarted","Data":"aa1fa59b02e32f26258a1565ebf067249e0d3ec4755627842b73b6bb9a261f79"} Mar 08 00:50:40 crc kubenswrapper[4762]: I0308 00:50:40.565024 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" event={"ID":"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95","Type":"ContainerStarted","Data":"9f20ee537496d518f89b00f6e185be15e6da16a3af71dfefefc8aef0a5315496"} Mar 08 00:50:40 crc kubenswrapper[4762]: I0308 00:50:40.598930 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" podStartSLOduration=2.612136253 podStartE2EDuration="3.598902717s" podCreationTimestamp="2026-03-08 00:50:37 +0000 UTC" firstStartedPulling="2026-03-08 00:50:38.518315708 +0000 UTC m=+1659.992460062" lastFinishedPulling="2026-03-08 00:50:39.505082142 +0000 UTC m=+1660.979226526" 
observedRunningTime="2026-03-08 00:50:40.58920546 +0000 UTC m=+1662.063349814" watchObservedRunningTime="2026-03-08 00:50:40.598902717 +0000 UTC m=+1662.073047091" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.335789 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.469718 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-internal-tls-certs\") pod \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.469809 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-scripts\") pod \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.469891 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-public-tls-certs\") pod \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.469922 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-config-data\") pod \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.469963 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggdqd\" (UniqueName: 
\"kubernetes.io/projected/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-kube-api-access-ggdqd\") pod \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.470187 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-combined-ca-bundle\") pod \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\" (UID: \"14f69ade-3262-4bc2-9c8d-f17ebeabfce2\") " Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.478888 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-kube-api-access-ggdqd" (OuterVolumeSpecName: "kube-api-access-ggdqd") pod "14f69ade-3262-4bc2-9c8d-f17ebeabfce2" (UID: "14f69ade-3262-4bc2-9c8d-f17ebeabfce2"). InnerVolumeSpecName "kube-api-access-ggdqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.478931 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-scripts" (OuterVolumeSpecName: "scripts") pod "14f69ade-3262-4bc2-9c8d-f17ebeabfce2" (UID: "14f69ade-3262-4bc2-9c8d-f17ebeabfce2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.537052 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "14f69ade-3262-4bc2-9c8d-f17ebeabfce2" (UID: "14f69ade-3262-4bc2-9c8d-f17ebeabfce2"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.575399 4762 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.575427 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.575437 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggdqd\" (UniqueName: \"kubernetes.io/projected/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-kube-api-access-ggdqd\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.579414 4762 generic.go:334] "Generic (PLEG): container finished" podID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerID="183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18" exitCode=0 Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.579443 4762 generic.go:334] "Generic (PLEG): container finished" podID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerID="e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a" exitCode=0 Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.579476 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.579486 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f69ade-3262-4bc2-9c8d-f17ebeabfce2","Type":"ContainerDied","Data":"183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18"} Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.579558 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f69ade-3262-4bc2-9c8d-f17ebeabfce2","Type":"ContainerDied","Data":"e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a"} Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.579574 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"14f69ade-3262-4bc2-9c8d-f17ebeabfce2","Type":"ContainerDied","Data":"64199a12e6195bd56edf4c910e497f0eb8ac34066ebf37ff2d114f36827dca07"} Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.579595 4762 scope.go:117] "RemoveContainer" containerID="183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.586042 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "14f69ade-3262-4bc2-9c8d-f17ebeabfce2" (UID: "14f69ade-3262-4bc2-9c8d-f17ebeabfce2"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.611645 4762 scope.go:117] "RemoveContainer" containerID="e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.615804 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14f69ade-3262-4bc2-9c8d-f17ebeabfce2" (UID: "14f69ade-3262-4bc2-9c8d-f17ebeabfce2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.632371 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-config-data" (OuterVolumeSpecName: "config-data") pod "14f69ade-3262-4bc2-9c8d-f17ebeabfce2" (UID: "14f69ade-3262-4bc2-9c8d-f17ebeabfce2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.640287 4762 scope.go:117] "RemoveContainer" containerID="fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.669219 4762 scope.go:117] "RemoveContainer" containerID="209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.676901 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.676923 4762 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.676932 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14f69ade-3262-4bc2-9c8d-f17ebeabfce2-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.690340 4762 scope.go:117] "RemoveContainer" containerID="183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18" Mar 08 00:50:41 crc kubenswrapper[4762]: E0308 00:50:41.690752 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18\": container with ID starting with 183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18 not found: ID does not exist" containerID="183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.690820 4762 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18"} err="failed to get container status \"183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18\": rpc error: code = NotFound desc = could not find container \"183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18\": container with ID starting with 183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18 not found: ID does not exist" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.690841 4762 scope.go:117] "RemoveContainer" containerID="e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a" Mar 08 00:50:41 crc kubenswrapper[4762]: E0308 00:50:41.691107 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a\": container with ID starting with e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a not found: ID does not exist" containerID="e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.691130 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a"} err="failed to get container status \"e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a\": rpc error: code = NotFound desc = could not find container \"e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a\": container with ID starting with e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a not found: ID does not exist" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.691146 4762 scope.go:117] "RemoveContainer" containerID="fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593" Mar 08 00:50:41 crc kubenswrapper[4762]: E0308 00:50:41.691458 4762 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593\": container with ID starting with fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593 not found: ID does not exist" containerID="fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.691517 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593"} err="failed to get container status \"fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593\": rpc error: code = NotFound desc = could not find container \"fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593\": container with ID starting with fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593 not found: ID does not exist" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.691555 4762 scope.go:117] "RemoveContainer" containerID="209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c" Mar 08 00:50:41 crc kubenswrapper[4762]: E0308 00:50:41.691900 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c\": container with ID starting with 209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c not found: ID does not exist" containerID="209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.691923 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c"} err="failed to get container status \"209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c\": rpc error: code = NotFound desc = could not find container 
\"209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c\": container with ID starting with 209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c not found: ID does not exist" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.691935 4762 scope.go:117] "RemoveContainer" containerID="183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.692180 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18"} err="failed to get container status \"183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18\": rpc error: code = NotFound desc = could not find container \"183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18\": container with ID starting with 183715328bd908d66b3ed7c8243f27ed638c9bd974448e421e5cb434e0990e18 not found: ID does not exist" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.692196 4762 scope.go:117] "RemoveContainer" containerID="e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.692473 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a"} err="failed to get container status \"e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a\": rpc error: code = NotFound desc = could not find container \"e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a\": container with ID starting with e0bb575e7a422d6543411b568aacd7094a31906ea555254b6cf2b60ede6c4a4a not found: ID does not exist" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.692497 4762 scope.go:117] "RemoveContainer" containerID="fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.692685 4762 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593"} err="failed to get container status \"fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593\": rpc error: code = NotFound desc = could not find container \"fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593\": container with ID starting with fdc5a571177a576d04377577257d02d5cc1e29303e87cbaa739066819ed00593 not found: ID does not exist" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.692709 4762 scope.go:117] "RemoveContainer" containerID="209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.692893 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c"} err="failed to get container status \"209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c\": rpc error: code = NotFound desc = could not find container \"209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c\": container with ID starting with 209c6ad0b4d696ff302b4c72d20aae9ddb27c2d2cf52883cd1557cee3fd5440c not found: ID does not exist" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.920020 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.948521 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.967566 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 08 00:50:41 crc kubenswrapper[4762]: E0308 00:50:41.968290 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerName="aodh-notifier" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.968320 4762 
state_mem.go:107] "Deleted CPUSet assignment" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerName="aodh-notifier" Mar 08 00:50:41 crc kubenswrapper[4762]: E0308 00:50:41.968348 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerName="aodh-listener" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.968363 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerName="aodh-listener" Mar 08 00:50:41 crc kubenswrapper[4762]: E0308 00:50:41.968404 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerName="aodh-api" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.968419 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerName="aodh-api" Mar 08 00:50:41 crc kubenswrapper[4762]: E0308 00:50:41.968441 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerName="aodh-evaluator" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.968454 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerName="aodh-evaluator" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.968830 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerName="aodh-api" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.968880 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerName="aodh-listener" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.968920 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerName="aodh-notifier" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.968946 4762 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" containerName="aodh-evaluator" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.972403 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.973998 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-kqtwz" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.974319 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.975521 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.976029 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.980425 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Mar 08 00:50:41 crc kubenswrapper[4762]: I0308 00:50:41.983209 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.085265 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-public-tls-certs\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.085374 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc 
kubenswrapper[4762]: I0308 00:50:42.085433 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-config-data\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.085529 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-internal-tls-certs\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.085804 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-scripts\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.086343 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjqvf\" (UniqueName: \"kubernetes.io/projected/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-kube-api-access-sjqvf\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.190722 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjqvf\" (UniqueName: \"kubernetes.io/projected/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-kube-api-access-sjqvf\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.190869 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-public-tls-certs\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.190944 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.190994 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-config-data\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.191047 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-internal-tls-certs\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.191123 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-scripts\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.195739 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-internal-tls-certs\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.195858 4762 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.196196 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-scripts\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.199175 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-public-tls-certs\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.200826 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-config-data\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.210867 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjqvf\" (UniqueName: \"kubernetes.io/projected/e7f80bd6-31fa-43d5-b6d5-6667fd1486e5-kube-api-access-sjqvf\") pod \"aodh-0\" (UID: \"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5\") " pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.295047 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 08 00:50:42 crc kubenswrapper[4762]: W0308 00:50:42.808351 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7f80bd6_31fa_43d5_b6d5_6667fd1486e5.slice/crio-a3c6f2e0ee65c9a2a678499700aa76378e0b17caed8c39977c150ef86add3d02 WatchSource:0}: Error finding container a3c6f2e0ee65c9a2a678499700aa76378e0b17caed8c39977c150ef86add3d02: Status 404 returned error can't find the container with id a3c6f2e0ee65c9a2a678499700aa76378e0b17caed8c39977c150ef86add3d02 Mar 08 00:50:42 crc kubenswrapper[4762]: I0308 00:50:42.826504 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 08 00:50:43 crc kubenswrapper[4762]: I0308 00:50:43.247007 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 08 00:50:43 crc kubenswrapper[4762]: I0308 00:50:43.278362 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14f69ade-3262-4bc2-9c8d-f17ebeabfce2" path="/var/lib/kubelet/pods/14f69ade-3262-4bc2-9c8d-f17ebeabfce2/volumes" Mar 08 00:50:43 crc kubenswrapper[4762]: I0308 00:50:43.611948 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5","Type":"ContainerStarted","Data":"0625d2a9b6af3c947700dc392a892b0e2c25de4db1866f755aa5a50448b19d55"} Mar 08 00:50:43 crc kubenswrapper[4762]: I0308 00:50:43.611999 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5","Type":"ContainerStarted","Data":"a3c6f2e0ee65c9a2a678499700aa76378e0b17caed8c39977c150ef86add3d02"} Mar 08 00:50:44 crc kubenswrapper[4762]: I0308 00:50:44.625503 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5","Type":"ContainerStarted","Data":"380c945cca61e1a652cfdeda397cd0ab27f3be37a9f10b01d310ff18c30d81ee"} Mar 08 00:50:45 crc kubenswrapper[4762]: I0308 00:50:45.654149 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5","Type":"ContainerStarted","Data":"a29403715032cc49616e3badab3d9625a9d30e56083f9e021beb4c34dd2212dd"} Mar 08 00:50:47 crc kubenswrapper[4762]: I0308 00:50:47.715859 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e7f80bd6-31fa-43d5-b6d5-6667fd1486e5","Type":"ContainerStarted","Data":"6d42d9d6bacc372c0b8e001a5b4f9fb9e4e18a297b6d286f53bbb1fdbbe8e6aa"} Mar 08 00:50:47 crc kubenswrapper[4762]: I0308 00:50:47.759103 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.149266758 podStartE2EDuration="6.759080285s" podCreationTimestamp="2026-03-08 00:50:41 +0000 UTC" firstStartedPulling="2026-03-08 00:50:42.811904621 +0000 UTC m=+1664.286048985" lastFinishedPulling="2026-03-08 00:50:46.421718168 +0000 UTC m=+1667.895862512" observedRunningTime="2026-03-08 00:50:47.744945643 +0000 UTC m=+1669.219089997" watchObservedRunningTime="2026-03-08 00:50:47.759080285 +0000 UTC m=+1669.233224639" Mar 08 00:51:24 crc kubenswrapper[4762]: I0308 00:51:24.222089 4762 scope.go:117] "RemoveContainer" containerID="5bc0148767a036e46719269d2969ea316d93c9670ce8be1ee742bb2df15ab7fd" Mar 08 00:51:24 crc kubenswrapper[4762]: I0308 00:51:24.265285 4762 scope.go:117] "RemoveContainer" containerID="9981db043aa2dde646fea769431ba4acc80349a1d84fbadfe09ccb3c34c5620e" Mar 08 00:51:24 crc kubenswrapper[4762]: I0308 00:51:24.369005 4762 scope.go:117] "RemoveContainer" containerID="0f2f9d8dbf21fd8bfcdf8b0129c54ed1381f9c1ec33a9a01ecbe25f718a918ad" Mar 08 00:51:24 crc kubenswrapper[4762]: I0308 00:51:24.425784 4762 scope.go:117] "RemoveContainer" 
containerID="78390e68e43b110eb2c6b6e7feb99eeadc532ef8ad8b7628fabfb5d3436d4c91" Mar 08 00:51:42 crc kubenswrapper[4762]: I0308 00:51:42.851557 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 00:51:42 crc kubenswrapper[4762]: I0308 00:51:42.852208 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:52:00 crc kubenswrapper[4762]: I0308 00:52:00.166561 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548852-x2bd8"] Mar 08 00:52:00 crc kubenswrapper[4762]: I0308 00:52:00.168993 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548852-x2bd8" Mar 08 00:52:00 crc kubenswrapper[4762]: I0308 00:52:00.173539 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 00:52:00 crc kubenswrapper[4762]: I0308 00:52:00.173856 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 00:52:00 crc kubenswrapper[4762]: I0308 00:52:00.174024 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 00:52:00 crc kubenswrapper[4762]: I0308 00:52:00.182722 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548852-x2bd8"] Mar 08 00:52:00 crc kubenswrapper[4762]: I0308 00:52:00.265905 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2csw\" (UniqueName: \"kubernetes.io/projected/9ddcaece-8907-4f18-b7a7-ab8def2796cd-kube-api-access-b2csw\") pod \"auto-csr-approver-29548852-x2bd8\" (UID: \"9ddcaece-8907-4f18-b7a7-ab8def2796cd\") " pod="openshift-infra/auto-csr-approver-29548852-x2bd8" Mar 08 00:52:00 crc kubenswrapper[4762]: I0308 00:52:00.368511 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2csw\" (UniqueName: \"kubernetes.io/projected/9ddcaece-8907-4f18-b7a7-ab8def2796cd-kube-api-access-b2csw\") pod \"auto-csr-approver-29548852-x2bd8\" (UID: \"9ddcaece-8907-4f18-b7a7-ab8def2796cd\") " pod="openshift-infra/auto-csr-approver-29548852-x2bd8" Mar 08 00:52:00 crc kubenswrapper[4762]: I0308 00:52:00.391209 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2csw\" (UniqueName: \"kubernetes.io/projected/9ddcaece-8907-4f18-b7a7-ab8def2796cd-kube-api-access-b2csw\") pod \"auto-csr-approver-29548852-x2bd8\" (UID: \"9ddcaece-8907-4f18-b7a7-ab8def2796cd\") " 
pod="openshift-infra/auto-csr-approver-29548852-x2bd8" Mar 08 00:52:00 crc kubenswrapper[4762]: I0308 00:52:00.493281 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548852-x2bd8" Mar 08 00:52:01 crc kubenswrapper[4762]: I0308 00:52:01.031037 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548852-x2bd8"] Mar 08 00:52:01 crc kubenswrapper[4762]: I0308 00:52:01.737894 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548852-x2bd8" event={"ID":"9ddcaece-8907-4f18-b7a7-ab8def2796cd","Type":"ContainerStarted","Data":"48edec8e085ca3d7416e14c8d2cb3ccc78127c6c4dde5355ee72ee3c9192a669"} Mar 08 00:52:02 crc kubenswrapper[4762]: I0308 00:52:02.749742 4762 generic.go:334] "Generic (PLEG): container finished" podID="9ddcaece-8907-4f18-b7a7-ab8def2796cd" containerID="d036e39e7bd43f13739d4c1e24070d33cf6e969f28a898534c4d38a015f1f7f4" exitCode=0 Mar 08 00:52:02 crc kubenswrapper[4762]: I0308 00:52:02.749977 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548852-x2bd8" event={"ID":"9ddcaece-8907-4f18-b7a7-ab8def2796cd","Type":"ContainerDied","Data":"d036e39e7bd43f13739d4c1e24070d33cf6e969f28a898534c4d38a015f1f7f4"} Mar 08 00:52:04 crc kubenswrapper[4762]: I0308 00:52:04.175076 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548852-x2bd8" Mar 08 00:52:04 crc kubenswrapper[4762]: I0308 00:52:04.256590 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2csw\" (UniqueName: \"kubernetes.io/projected/9ddcaece-8907-4f18-b7a7-ab8def2796cd-kube-api-access-b2csw\") pod \"9ddcaece-8907-4f18-b7a7-ab8def2796cd\" (UID: \"9ddcaece-8907-4f18-b7a7-ab8def2796cd\") " Mar 08 00:52:04 crc kubenswrapper[4762]: I0308 00:52:04.265941 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ddcaece-8907-4f18-b7a7-ab8def2796cd-kube-api-access-b2csw" (OuterVolumeSpecName: "kube-api-access-b2csw") pod "9ddcaece-8907-4f18-b7a7-ab8def2796cd" (UID: "9ddcaece-8907-4f18-b7a7-ab8def2796cd"). InnerVolumeSpecName "kube-api-access-b2csw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:52:04 crc kubenswrapper[4762]: I0308 00:52:04.365335 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2csw\" (UniqueName: \"kubernetes.io/projected/9ddcaece-8907-4f18-b7a7-ab8def2796cd-kube-api-access-b2csw\") on node \"crc\" DevicePath \"\"" Mar 08 00:52:04 crc kubenswrapper[4762]: I0308 00:52:04.784354 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548852-x2bd8" event={"ID":"9ddcaece-8907-4f18-b7a7-ab8def2796cd","Type":"ContainerDied","Data":"48edec8e085ca3d7416e14c8d2cb3ccc78127c6c4dde5355ee72ee3c9192a669"} Mar 08 00:52:04 crc kubenswrapper[4762]: I0308 00:52:04.784413 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48edec8e085ca3d7416e14c8d2cb3ccc78127c6c4dde5355ee72ee3c9192a669" Mar 08 00:52:04 crc kubenswrapper[4762]: I0308 00:52:04.784498 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548852-x2bd8" Mar 08 00:52:05 crc kubenswrapper[4762]: I0308 00:52:05.279715 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548846-9shbs"] Mar 08 00:52:05 crc kubenswrapper[4762]: I0308 00:52:05.291474 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548846-9shbs"] Mar 08 00:52:07 crc kubenswrapper[4762]: I0308 00:52:07.286795 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6708e2b7-0087-40fd-947c-b5c7adb5dcd9" path="/var/lib/kubelet/pods/6708e2b7-0087-40fd-947c-b5c7adb5dcd9/volumes" Mar 08 00:52:12 crc kubenswrapper[4762]: I0308 00:52:12.852203 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 00:52:12 crc kubenswrapper[4762]: I0308 00:52:12.852668 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:52:24 crc kubenswrapper[4762]: I0308 00:52:24.693030 4762 scope.go:117] "RemoveContainer" containerID="0d05e5e31286984017d8bc0d9aab4776a20bd7df358efb1107f01241ef61d309" Mar 08 00:52:24 crc kubenswrapper[4762]: I0308 00:52:24.739837 4762 scope.go:117] "RemoveContainer" containerID="1818ba3c6001fd50ef6679846e58d75a21828782350520c3d35f275564612aad" Mar 08 00:52:24 crc kubenswrapper[4762]: I0308 00:52:24.848169 4762 scope.go:117] "RemoveContainer" containerID="abc744f32d09c41f0a4ab31e6f0d84ee599b7d238b9a3ad67df75e39dfbfee5c" Mar 08 00:52:24 crc 
kubenswrapper[4762]: I0308 00:52:24.887868 4762 scope.go:117] "RemoveContainer" containerID="0a105b6f068febd894a6f32603b932912433f734261860740743fce511f2f984" Mar 08 00:52:42 crc kubenswrapper[4762]: I0308 00:52:42.851171 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 00:52:42 crc kubenswrapper[4762]: I0308 00:52:42.851823 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 00:52:42 crc kubenswrapper[4762]: I0308 00:52:42.851874 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 00:52:42 crc kubenswrapper[4762]: I0308 00:52:42.852854 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 00:52:42 crc kubenswrapper[4762]: I0308 00:52:42.852916 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" gracePeriod=600 Mar 08 00:52:42 crc kubenswrapper[4762]: E0308 
00:52:42.980600 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:52:43 crc kubenswrapper[4762]: I0308 00:52:43.347882 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" exitCode=0 Mar 08 00:52:43 crc kubenswrapper[4762]: I0308 00:52:43.348002 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac"} Mar 08 00:52:43 crc kubenswrapper[4762]: I0308 00:52:43.348205 4762 scope.go:117] "RemoveContainer" containerID="34431af11881eaae8980f4fa624e154f145ace7580df4ac523d50069777cde15" Mar 08 00:52:43 crc kubenswrapper[4762]: I0308 00:52:43.349204 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:52:43 crc kubenswrapper[4762]: E0308 00:52:43.349853 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:52:54 crc kubenswrapper[4762]: I0308 00:52:54.263689 4762 scope.go:117] "RemoveContainer" 
containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:52:54 crc kubenswrapper[4762]: E0308 00:52:54.264601 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:53:08 crc kubenswrapper[4762]: I0308 00:53:08.264671 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:53:08 crc kubenswrapper[4762]: E0308 00:53:08.266238 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:53:20 crc kubenswrapper[4762]: I0308 00:53:20.264165 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:53:20 crc kubenswrapper[4762]: E0308 00:53:20.265368 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:53:25 crc kubenswrapper[4762]: I0308 00:53:25.031546 4762 scope.go:117] 
"RemoveContainer" containerID="91905d79615d3e9a43d074de4ee94acfa48c2b92369065bbfd487a625ceedb36" Mar 08 00:53:25 crc kubenswrapper[4762]: I0308 00:53:25.069348 4762 scope.go:117] "RemoveContainer" containerID="a26bdf0a7ea35fb84ed504a559d33acfd3a4d73ae04389e6e1c581648ae1d2b8" Mar 08 00:53:25 crc kubenswrapper[4762]: I0308 00:53:25.108488 4762 scope.go:117] "RemoveContainer" containerID="c3cafd82f77a0db7e0a3fe9fc65b9ef96d9d186b354d145198ab88a96bbe1ec7" Mar 08 00:53:25 crc kubenswrapper[4762]: I0308 00:53:25.134163 4762 scope.go:117] "RemoveContainer" containerID="973fbd398fb56eb76a78c13f247426ba7cba02ba71382588679df59c2bfda3ea" Mar 08 00:53:28 crc kubenswrapper[4762]: I0308 00:53:28.922416 4762 generic.go:334] "Generic (PLEG): container finished" podID="d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95" containerID="9f20ee537496d518f89b00f6e185be15e6da16a3af71dfefefc8aef0a5315496" exitCode=0 Mar 08 00:53:28 crc kubenswrapper[4762]: I0308 00:53:28.922538 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" event={"ID":"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95","Type":"ContainerDied","Data":"9f20ee537496d518f89b00f6e185be15e6da16a3af71dfefefc8aef0a5315496"} Mar 08 00:53:30 crc kubenswrapper[4762]: I0308 00:53:30.442616 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" Mar 08 00:53:30 crc kubenswrapper[4762]: I0308 00:53:30.638529 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-ssh-key-openstack-edpm-ipam\") pod \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\" (UID: \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\") " Mar 08 00:53:30 crc kubenswrapper[4762]: I0308 00:53:30.638574 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-inventory\") pod \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\" (UID: \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\") " Mar 08 00:53:30 crc kubenswrapper[4762]: I0308 00:53:30.638659 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h54j\" (UniqueName: \"kubernetes.io/projected/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-kube-api-access-5h54j\") pod \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\" (UID: \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\") " Mar 08 00:53:30 crc kubenswrapper[4762]: I0308 00:53:30.638735 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-bootstrap-combined-ca-bundle\") pod \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\" (UID: \"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95\") " Mar 08 00:53:30 crc kubenswrapper[4762]: I0308 00:53:30.643535 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95" (UID: "d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:53:30 crc kubenswrapper[4762]: I0308 00:53:30.643613 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-kube-api-access-5h54j" (OuterVolumeSpecName: "kube-api-access-5h54j") pod "d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95" (UID: "d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95"). InnerVolumeSpecName "kube-api-access-5h54j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:53:30 crc kubenswrapper[4762]: I0308 00:53:30.672440 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95" (UID: "d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:53:30 crc kubenswrapper[4762]: I0308 00:53:30.678308 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-inventory" (OuterVolumeSpecName: "inventory") pod "d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95" (UID: "d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:53:30 crc kubenswrapper[4762]: I0308 00:53:30.742170 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 00:53:30 crc kubenswrapper[4762]: I0308 00:53:30.742244 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 00:53:30 crc kubenswrapper[4762]: I0308 00:53:30.742269 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h54j\" (UniqueName: \"kubernetes.io/projected/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-kube-api-access-5h54j\") on node \"crc\" DevicePath \"\"" Mar 08 00:53:30 crc kubenswrapper[4762]: I0308 00:53:30.742293 4762 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:53:30 crc kubenswrapper[4762]: I0308 00:53:30.948609 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" event={"ID":"d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95","Type":"ContainerDied","Data":"aa1fa59b02e32f26258a1565ebf067249e0d3ec4755627842b73b6bb9a261f79"} Mar 08 00:53:30 crc kubenswrapper[4762]: I0308 00:53:30.948669 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa1fa59b02e32f26258a1565ebf067249e0d3ec4755627842b73b6bb9a261f79" Mar 08 00:53:30 crc kubenswrapper[4762]: I0308 00:53:30.948724 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.063222 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb"] Mar 08 00:53:31 crc kubenswrapper[4762]: E0308 00:53:31.064000 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.064032 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 08 00:53:31 crc kubenswrapper[4762]: E0308 00:53:31.064067 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ddcaece-8907-4f18-b7a7-ab8def2796cd" containerName="oc" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.064080 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ddcaece-8907-4f18-b7a7-ab8def2796cd" containerName="oc" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.064493 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ddcaece-8907-4f18-b7a7-ab8def2796cd" containerName="oc" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.064544 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.065903 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.076913 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb"] Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.119914 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.119994 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.120343 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.120539 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.149518 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf625\" (UniqueName: \"kubernetes.io/projected/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-kube-api-access-rf625\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-84jzb\" (UID: \"c58b70cc-254f-4a6f-9acc-df7b1852f7d6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.149685 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-84jzb\" (UID: \"c58b70cc-254f-4a6f-9acc-df7b1852f7d6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" Mar 08 00:53:31 crc kubenswrapper[4762]: 
I0308 00:53:31.149740 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-84jzb\" (UID: \"c58b70cc-254f-4a6f-9acc-df7b1852f7d6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.252203 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-84jzb\" (UID: \"c58b70cc-254f-4a6f-9acc-df7b1852f7d6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.252337 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-84jzb\" (UID: \"c58b70cc-254f-4a6f-9acc-df7b1852f7d6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.252551 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf625\" (UniqueName: \"kubernetes.io/projected/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-kube-api-access-rf625\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-84jzb\" (UID: \"c58b70cc-254f-4a6f-9acc-df7b1852f7d6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.256110 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-84jzb\" (UID: \"c58b70cc-254f-4a6f-9acc-df7b1852f7d6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.257547 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-84jzb\" (UID: \"c58b70cc-254f-4a6f-9acc-df7b1852f7d6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.274394 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf625\" (UniqueName: \"kubernetes.io/projected/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-kube-api-access-rf625\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-84jzb\" (UID: \"c58b70cc-254f-4a6f-9acc-df7b1852f7d6\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.437866 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.983092 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb"] Mar 08 00:53:31 crc kubenswrapper[4762]: I0308 00:53:31.989495 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 00:53:32 crc kubenswrapper[4762]: I0308 00:53:32.263697 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:53:32 crc kubenswrapper[4762]: E0308 00:53:32.264292 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:53:32 crc kubenswrapper[4762]: I0308 00:53:32.979412 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" event={"ID":"c58b70cc-254f-4a6f-9acc-df7b1852f7d6","Type":"ContainerStarted","Data":"dc4549bde0c76d0e63be61eb119d11e8c13a31ce950e6b9e5bfc2989e06fe218"} Mar 08 00:53:32 crc kubenswrapper[4762]: I0308 00:53:32.979835 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" event={"ID":"c58b70cc-254f-4a6f-9acc-df7b1852f7d6","Type":"ContainerStarted","Data":"cfef21cb0375ffe073fee862c51f8c448465b465b164ee95019c7f18cf33763f"} Mar 08 00:53:33 crc kubenswrapper[4762]: I0308 00:53:33.013738 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" podStartSLOduration=1.516616527 podStartE2EDuration="2.013708494s" podCreationTimestamp="2026-03-08 00:53:31 +0000 UTC" firstStartedPulling="2026-03-08 00:53:31.989211981 +0000 UTC m=+1833.463356335" lastFinishedPulling="2026-03-08 00:53:32.486303918 +0000 UTC m=+1833.960448302" observedRunningTime="2026-03-08 00:53:32.999194989 +0000 UTC m=+1834.473339363" watchObservedRunningTime="2026-03-08 00:53:33.013708494 +0000 UTC m=+1834.487852868" Mar 08 00:53:44 crc kubenswrapper[4762]: I0308 00:53:44.263856 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:53:44 crc kubenswrapper[4762]: E0308 00:53:44.264985 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:53:58 crc kubenswrapper[4762]: I0308 00:53:58.263662 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:53:58 crc kubenswrapper[4762]: E0308 00:53:58.264496 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:54:00 crc kubenswrapper[4762]: I0308 00:54:00.156539 4762 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29548854-g5tgg"] Mar 08 00:54:00 crc kubenswrapper[4762]: I0308 00:54:00.158654 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548854-g5tgg" Mar 08 00:54:00 crc kubenswrapper[4762]: I0308 00:54:00.174986 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548854-g5tgg"] Mar 08 00:54:00 crc kubenswrapper[4762]: I0308 00:54:00.205711 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 00:54:00 crc kubenswrapper[4762]: I0308 00:54:00.205975 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 00:54:00 crc kubenswrapper[4762]: I0308 00:54:00.206137 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 00:54:00 crc kubenswrapper[4762]: I0308 00:54:00.252301 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-526ks\" (UniqueName: \"kubernetes.io/projected/05c005e7-2d25-403a-a39b-d4833e076719-kube-api-access-526ks\") pod \"auto-csr-approver-29548854-g5tgg\" (UID: \"05c005e7-2d25-403a-a39b-d4833e076719\") " pod="openshift-infra/auto-csr-approver-29548854-g5tgg" Mar 08 00:54:00 crc kubenswrapper[4762]: I0308 00:54:00.354308 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-526ks\" (UniqueName: \"kubernetes.io/projected/05c005e7-2d25-403a-a39b-d4833e076719-kube-api-access-526ks\") pod \"auto-csr-approver-29548854-g5tgg\" (UID: \"05c005e7-2d25-403a-a39b-d4833e076719\") " pod="openshift-infra/auto-csr-approver-29548854-g5tgg" Mar 08 00:54:00 crc kubenswrapper[4762]: I0308 00:54:00.371603 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-526ks\" (UniqueName: 
\"kubernetes.io/projected/05c005e7-2d25-403a-a39b-d4833e076719-kube-api-access-526ks\") pod \"auto-csr-approver-29548854-g5tgg\" (UID: \"05c005e7-2d25-403a-a39b-d4833e076719\") " pod="openshift-infra/auto-csr-approver-29548854-g5tgg" Mar 08 00:54:00 crc kubenswrapper[4762]: I0308 00:54:00.524283 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548854-g5tgg" Mar 08 00:54:01 crc kubenswrapper[4762]: I0308 00:54:01.011696 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548854-g5tgg"] Mar 08 00:54:01 crc kubenswrapper[4762]: I0308 00:54:01.344994 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548854-g5tgg" event={"ID":"05c005e7-2d25-403a-a39b-d4833e076719","Type":"ContainerStarted","Data":"17624c760ba6d7fd8ef10403bc9eb6b0363c55b3ca37e36bbe2b78242ae1ec00"} Mar 08 00:54:02 crc kubenswrapper[4762]: I0308 00:54:02.356102 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548854-g5tgg" event={"ID":"05c005e7-2d25-403a-a39b-d4833e076719","Type":"ContainerStarted","Data":"e12aae2ce657f7444a3694003a2e9b2d3a37179e3389a98f3c2af4ac99ce1730"} Mar 08 00:54:02 crc kubenswrapper[4762]: I0308 00:54:02.383624 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548854-g5tgg" podStartSLOduration=1.393894452 podStartE2EDuration="2.383605216s" podCreationTimestamp="2026-03-08 00:54:00 +0000 UTC" firstStartedPulling="2026-03-08 00:54:01.016295272 +0000 UTC m=+1862.490439606" lastFinishedPulling="2026-03-08 00:54:02.006006016 +0000 UTC m=+1863.480150370" observedRunningTime="2026-03-08 00:54:02.376433995 +0000 UTC m=+1863.850578349" watchObservedRunningTime="2026-03-08 00:54:02.383605216 +0000 UTC m=+1863.857749560" Mar 08 00:54:03 crc kubenswrapper[4762]: I0308 00:54:03.367139 4762 generic.go:334] "Generic (PLEG): container 
finished" podID="05c005e7-2d25-403a-a39b-d4833e076719" containerID="e12aae2ce657f7444a3694003a2e9b2d3a37179e3389a98f3c2af4ac99ce1730" exitCode=0 Mar 08 00:54:03 crc kubenswrapper[4762]: I0308 00:54:03.367362 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548854-g5tgg" event={"ID":"05c005e7-2d25-403a-a39b-d4833e076719","Type":"ContainerDied","Data":"e12aae2ce657f7444a3694003a2e9b2d3a37179e3389a98f3c2af4ac99ce1730"} Mar 08 00:54:04 crc kubenswrapper[4762]: I0308 00:54:04.754801 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548854-g5tgg" Mar 08 00:54:04 crc kubenswrapper[4762]: I0308 00:54:04.849055 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-526ks\" (UniqueName: \"kubernetes.io/projected/05c005e7-2d25-403a-a39b-d4833e076719-kube-api-access-526ks\") pod \"05c005e7-2d25-403a-a39b-d4833e076719\" (UID: \"05c005e7-2d25-403a-a39b-d4833e076719\") " Mar 08 00:54:04 crc kubenswrapper[4762]: I0308 00:54:04.855574 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05c005e7-2d25-403a-a39b-d4833e076719-kube-api-access-526ks" (OuterVolumeSpecName: "kube-api-access-526ks") pod "05c005e7-2d25-403a-a39b-d4833e076719" (UID: "05c005e7-2d25-403a-a39b-d4833e076719"). InnerVolumeSpecName "kube-api-access-526ks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:54:04 crc kubenswrapper[4762]: I0308 00:54:04.951436 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-526ks\" (UniqueName: \"kubernetes.io/projected/05c005e7-2d25-403a-a39b-d4833e076719-kube-api-access-526ks\") on node \"crc\" DevicePath \"\"" Mar 08 00:54:05 crc kubenswrapper[4762]: I0308 00:54:05.394469 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548854-g5tgg" event={"ID":"05c005e7-2d25-403a-a39b-d4833e076719","Type":"ContainerDied","Data":"17624c760ba6d7fd8ef10403bc9eb6b0363c55b3ca37e36bbe2b78242ae1ec00"} Mar 08 00:54:05 crc kubenswrapper[4762]: I0308 00:54:05.394521 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17624c760ba6d7fd8ef10403bc9eb6b0363c55b3ca37e36bbe2b78242ae1ec00" Mar 08 00:54:05 crc kubenswrapper[4762]: I0308 00:54:05.394595 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548854-g5tgg" Mar 08 00:54:05 crc kubenswrapper[4762]: I0308 00:54:05.470146 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548848-5qzjs"] Mar 08 00:54:05 crc kubenswrapper[4762]: I0308 00:54:05.483349 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548848-5qzjs"] Mar 08 00:54:07 crc kubenswrapper[4762]: I0308 00:54:07.283384 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d06066b-71da-4572-86bd-d6958ec35438" path="/var/lib/kubelet/pods/2d06066b-71da-4572-86bd-d6958ec35438/volumes" Mar 08 00:54:08 crc kubenswrapper[4762]: I0308 00:54:08.056021 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-65c5-account-create-update-b49fz"] Mar 08 00:54:08 crc kubenswrapper[4762]: I0308 00:54:08.072850 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-db-create-jq6nb"] Mar 08 00:54:08 crc kubenswrapper[4762]: I0308 00:54:08.087931 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-jq6nb"] Mar 08 00:54:08 crc kubenswrapper[4762]: I0308 00:54:08.099974 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-65c5-account-create-update-b49fz"] Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.046321 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-jvbzs"] Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.062458 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-a691-account-create-update-v7fhx"] Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.081425 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-5282s"] Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.092793 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-9d8d-account-create-update-q8xp9"] Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.106155 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-5tsjp"] Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.118269 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-jvbzs"] Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.143267 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-5282s"] Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.143545 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-a691-account-create-update-v7fhx"] Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.143643 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-9d8d-account-create-update-q8xp9"] Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.181000 4762 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-5tsjp"] Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.190914 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-991a-account-create-update-xvbkq"] Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.202044 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-991a-account-create-update-xvbkq"] Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.277505 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b804311-8ca9-4928-b16c-626fb3fc2db1" path="/var/lib/kubelet/pods/2b804311-8ca9-4928-b16c-626fb3fc2db1/volumes" Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.278231 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c9a41e2-e552-41bb-bf6f-393ab1186f83" path="/var/lib/kubelet/pods/2c9a41e2-e552-41bb-bf6f-393ab1186f83/volumes" Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.278899 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aefe657-7268-4c06-be92-61d570355268" path="/var/lib/kubelet/pods/5aefe657-7268-4c06-be92-61d570355268/volumes" Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.279526 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af6797d1-1e15-4695-8b5f-fc508fbc3dfb" path="/var/lib/kubelet/pods/af6797d1-1e15-4695-8b5f-fc508fbc3dfb/volumes" Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.280677 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a" path="/var/lib/kubelet/pods/bda9a0e2-e0aa-4a4c-82b0-65fc631f6a5a/volumes" Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.281202 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c380979a-505d-4323-b423-54db896cbd32" path="/var/lib/kubelet/pods/c380979a-505d-4323-b423-54db896cbd32/volumes" Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 
00:54:09.281815 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6374dea-1aa3-48b3-811e-31eb24e6c789" path="/var/lib/kubelet/pods/e6374dea-1aa3-48b3-811e-31eb24e6c789/volumes" Mar 08 00:54:09 crc kubenswrapper[4762]: I0308 00:54:09.283222 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4c47864-f560-4bbd-81f2-a7b25e917468" path="/var/lib/kubelet/pods/f4c47864-f560-4bbd-81f2-a7b25e917468/volumes" Mar 08 00:54:10 crc kubenswrapper[4762]: I0308 00:54:10.264178 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:54:10 crc kubenswrapper[4762]: E0308 00:54:10.264489 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:54:17 crc kubenswrapper[4762]: I0308 00:54:17.046496 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w"] Mar 08 00:54:17 crc kubenswrapper[4762]: I0308 00:54:17.069238 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-zlj5w"] Mar 08 00:54:17 crc kubenswrapper[4762]: I0308 00:54:17.276115 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="520a38bb-14f1-4be1-a039-520016f372e0" path="/var/lib/kubelet/pods/520a38bb-14f1-4be1-a039-520016f372e0/volumes" Mar 08 00:54:18 crc kubenswrapper[4762]: I0308 00:54:18.051184 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-4077-account-create-update-xm99q"] Mar 08 00:54:18 crc kubenswrapper[4762]: I0308 00:54:18.063687 4762 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-4077-account-create-update-xm99q"] Mar 08 00:54:19 crc kubenswrapper[4762]: I0308 00:54:19.278053 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8" path="/var/lib/kubelet/pods/eb5b05e7-09ba-4fae-914f-bed7c6ee5bd8/volumes" Mar 08 00:54:21 crc kubenswrapper[4762]: I0308 00:54:21.264050 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:54:21 crc kubenswrapper[4762]: E0308 00:54:21.265197 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:54:25 crc kubenswrapper[4762]: I0308 00:54:25.237448 4762 scope.go:117] "RemoveContainer" containerID="e6813afa2ee807b691ba3cec79736918a08d042c84b845867f9defdc0025ca78" Mar 08 00:54:25 crc kubenswrapper[4762]: I0308 00:54:25.266412 4762 scope.go:117] "RemoveContainer" containerID="8a3b965d36a6e5c64f0afc3a25ca694352fc43ca9c7cdc35e2ee6399ce87326a" Mar 08 00:54:25 crc kubenswrapper[4762]: I0308 00:54:25.364967 4762 scope.go:117] "RemoveContainer" containerID="37b82cc5a20bab4bdd0b2cab6aeb36bfb6dcec76124ac527bb1ba08076462dec" Mar 08 00:54:25 crc kubenswrapper[4762]: I0308 00:54:25.416037 4762 scope.go:117] "RemoveContainer" containerID="1ef85f81ac3661518882b18eb76ac65f6a36cb49dc2f37fce44713f7db2f67af" Mar 08 00:54:25 crc kubenswrapper[4762]: I0308 00:54:25.482856 4762 scope.go:117] "RemoveContainer" containerID="398279af41c572272122adc5f50a56fe758dfc91325e94bb9c63cc3ceda81b6d" Mar 08 00:54:25 crc kubenswrapper[4762]: I0308 00:54:25.508939 
4762 scope.go:117] "RemoveContainer" containerID="1f715a019ce50952bb2791784585d9b0c888a620214525c6ba9e7ccb3f274f8c" Mar 08 00:54:25 crc kubenswrapper[4762]: I0308 00:54:25.588032 4762 scope.go:117] "RemoveContainer" containerID="933f19f7cf71b4c43a1d088bec5acd131c9ff5551db9087ee577867812dba2a5" Mar 08 00:54:25 crc kubenswrapper[4762]: I0308 00:54:25.621634 4762 scope.go:117] "RemoveContainer" containerID="fc70616047e64037618c8c631e50d3f2a648acf6d68f5ec4b5a1b3665eaa4418" Mar 08 00:54:25 crc kubenswrapper[4762]: I0308 00:54:25.672752 4762 scope.go:117] "RemoveContainer" containerID="2ad145ab5f5be59731a223760e0a734bfaf8e75a759e243e8d17d9723503be74" Mar 08 00:54:25 crc kubenswrapper[4762]: I0308 00:54:25.699621 4762 scope.go:117] "RemoveContainer" containerID="9c5b5bc6140bec8b935b6724af712306b9e97c1dcd59e4594f4fcc91bbfd18dd" Mar 08 00:54:25 crc kubenswrapper[4762]: I0308 00:54:25.722546 4762 scope.go:117] "RemoveContainer" containerID="c69a57ed62af5f482936ad5b8089f8cfbbab35864c53b4484ecb53831e3025df" Mar 08 00:54:25 crc kubenswrapper[4762]: I0308 00:54:25.748346 4762 scope.go:117] "RemoveContainer" containerID="75e99d917b83fb21cef835246d925843fcbed0e596de6a855c7ce486870551c6" Mar 08 00:54:35 crc kubenswrapper[4762]: I0308 00:54:35.042304 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-f242d"] Mar 08 00:54:35 crc kubenswrapper[4762]: I0308 00:54:35.055187 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-f242d"] Mar 08 00:54:35 crc kubenswrapper[4762]: I0308 00:54:35.264138 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:54:35 crc kubenswrapper[4762]: E0308 00:54:35.264611 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:54:35 crc kubenswrapper[4762]: I0308 00:54:35.284146 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da255bbb-75db-4a07-8547-2bf0794edd04" path="/var/lib/kubelet/pods/da255bbb-75db-4a07-8547-2bf0794edd04/volumes" Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.051437 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mhxgz"] Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.072873 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-c616-account-create-update-tvdx2"] Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.091923 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-mhxgz"] Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.106380 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-fd64-account-create-update-dlhcc"] Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.116431 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-82a0-account-create-update-c5g4h"] Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.128152 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-c616-account-create-update-tvdx2"] Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.137178 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-fd64-account-create-update-dlhcc"] Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.146561 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-82a0-account-create-update-c5g4h"] Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.155484 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-76c6s"] 
Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.164488 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4682-account-create-update-fmc2n"] Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.173499 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4682-account-create-update-fmc2n"] Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.181648 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-pc8mm"] Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.189143 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-pc8mm"] Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.196786 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-76c6s"] Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.279615 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2595315c-6bb3-4ac1-a860-004cf18c89af" path="/var/lib/kubelet/pods/2595315c-6bb3-4ac1-a860-004cf18c89af/volumes" Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.281709 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2844b501-49b8-4b08-adbe-30159ca77f47" path="/var/lib/kubelet/pods/2844b501-49b8-4b08-adbe-30159ca77f47/volumes" Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.283287 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b" path="/var/lib/kubelet/pods/2ed7f3d7-18e6-46e7-979f-c1dfe8eb489b/volumes" Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.284719 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55ac7177-d14a-4b66-bab1-d8de8b6d8bdb" path="/var/lib/kubelet/pods/55ac7177-d14a-4b66-bab1-d8de8b6d8bdb/volumes" Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.287352 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ae10ab24-1407-4b27-97ab-3424e4b85a03" path="/var/lib/kubelet/pods/ae10ab24-1407-4b27-97ab-3424e4b85a03/volumes" Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.288812 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11fd63c-9a18-4c14-a7fc-68bca559ce0f" path="/var/lib/kubelet/pods/d11fd63c-9a18-4c14-a7fc-68bca559ce0f/volumes" Mar 08 00:54:39 crc kubenswrapper[4762]: I0308 00:54:39.290261 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1da1613-f9de-4860-a63a-1ecd85e8f340" path="/var/lib/kubelet/pods/e1da1613-f9de-4860-a63a-1ecd85e8f340/volumes" Mar 08 00:54:40 crc kubenswrapper[4762]: I0308 00:54:40.866521 4762 generic.go:334] "Generic (PLEG): container finished" podID="c58b70cc-254f-4a6f-9acc-df7b1852f7d6" containerID="dc4549bde0c76d0e63be61eb119d11e8c13a31ce950e6b9e5bfc2989e06fe218" exitCode=0 Mar 08 00:54:40 crc kubenswrapper[4762]: I0308 00:54:40.866570 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" event={"ID":"c58b70cc-254f-4a6f-9acc-df7b1852f7d6","Type":"ContainerDied","Data":"dc4549bde0c76d0e63be61eb119d11e8c13a31ce950e6b9e5bfc2989e06fe218"} Mar 08 00:54:41 crc kubenswrapper[4762]: I0308 00:54:41.030722 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-5mfsp"] Mar 08 00:54:41 crc kubenswrapper[4762]: I0308 00:54:41.045820 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-5mfsp"] Mar 08 00:54:41 crc kubenswrapper[4762]: I0308 00:54:41.274008 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff70ef09-d101-4c3f-8a03-95b5fbe0b250" path="/var/lib/kubelet/pods/ff70ef09-d101-4c3f-8a03-95b5fbe0b250/volumes" Mar 08 00:54:42 crc kubenswrapper[4762]: I0308 00:54:42.438410 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" Mar 08 00:54:42 crc kubenswrapper[4762]: I0308 00:54:42.561352 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-ssh-key-openstack-edpm-ipam\") pod \"c58b70cc-254f-4a6f-9acc-df7b1852f7d6\" (UID: \"c58b70cc-254f-4a6f-9acc-df7b1852f7d6\") " Mar 08 00:54:42 crc kubenswrapper[4762]: I0308 00:54:42.561454 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-inventory\") pod \"c58b70cc-254f-4a6f-9acc-df7b1852f7d6\" (UID: \"c58b70cc-254f-4a6f-9acc-df7b1852f7d6\") " Mar 08 00:54:42 crc kubenswrapper[4762]: I0308 00:54:42.561702 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf625\" (UniqueName: \"kubernetes.io/projected/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-kube-api-access-rf625\") pod \"c58b70cc-254f-4a6f-9acc-df7b1852f7d6\" (UID: \"c58b70cc-254f-4a6f-9acc-df7b1852f7d6\") " Mar 08 00:54:42 crc kubenswrapper[4762]: I0308 00:54:42.576369 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-kube-api-access-rf625" (OuterVolumeSpecName: "kube-api-access-rf625") pod "c58b70cc-254f-4a6f-9acc-df7b1852f7d6" (UID: "c58b70cc-254f-4a6f-9acc-df7b1852f7d6"). InnerVolumeSpecName "kube-api-access-rf625". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:54:42 crc kubenswrapper[4762]: I0308 00:54:42.614729 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-inventory" (OuterVolumeSpecName: "inventory") pod "c58b70cc-254f-4a6f-9acc-df7b1852f7d6" (UID: "c58b70cc-254f-4a6f-9acc-df7b1852f7d6"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:54:42 crc kubenswrapper[4762]: I0308 00:54:42.628892 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c58b70cc-254f-4a6f-9acc-df7b1852f7d6" (UID: "c58b70cc-254f-4a6f-9acc-df7b1852f7d6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:54:42 crc kubenswrapper[4762]: I0308 00:54:42.664449 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf625\" (UniqueName: \"kubernetes.io/projected/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-kube-api-access-rf625\") on node \"crc\" DevicePath \"\"" Mar 08 00:54:42 crc kubenswrapper[4762]: I0308 00:54:42.664536 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 00:54:42 crc kubenswrapper[4762]: I0308 00:54:42.664556 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c58b70cc-254f-4a6f-9acc-df7b1852f7d6-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 00:54:42 crc kubenswrapper[4762]: I0308 00:54:42.892897 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" event={"ID":"c58b70cc-254f-4a6f-9acc-df7b1852f7d6","Type":"ContainerDied","Data":"cfef21cb0375ffe073fee862c51f8c448465b465b164ee95019c7f18cf33763f"} Mar 08 00:54:42 crc kubenswrapper[4762]: I0308 00:54:42.892939 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfef21cb0375ffe073fee862c51f8c448465b465b164ee95019c7f18cf33763f" Mar 08 00:54:42 crc kubenswrapper[4762]: I0308 
00:54:42.893003 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.018338 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6"] Mar 08 00:54:43 crc kubenswrapper[4762]: E0308 00:54:43.020038 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c58b70cc-254f-4a6f-9acc-df7b1852f7d6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.020173 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="c58b70cc-254f-4a6f-9acc-df7b1852f7d6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 08 00:54:43 crc kubenswrapper[4762]: E0308 00:54:43.020314 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05c005e7-2d25-403a-a39b-d4833e076719" containerName="oc" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.020403 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="05c005e7-2d25-403a-a39b-d4833e076719" containerName="oc" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.020829 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="05c005e7-2d25-403a-a39b-d4833e076719" containerName="oc" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.020968 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="c58b70cc-254f-4a6f-9acc-df7b1852f7d6" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.022260 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.029935 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.032602 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.032901 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.033596 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.064782 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6"] Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.073101 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bb7150b-e4a5-435a-a306-e82bd036f781-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6\" (UID: \"0bb7150b-e4a5-435a-a306-e82bd036f781\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.073157 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-8nwnr"] Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.073247 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bb7150b-e4a5-435a-a306-e82bd036f781-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6\" (UID: 
\"0bb7150b-e4a5-435a-a306-e82bd036f781\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.073434 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxbpg\" (UniqueName: \"kubernetes.io/projected/0bb7150b-e4a5-435a-a306-e82bd036f781-kube-api-access-rxbpg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6\" (UID: \"0bb7150b-e4a5-435a-a306-e82bd036f781\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.084736 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-8nwnr"] Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.175727 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bb7150b-e4a5-435a-a306-e82bd036f781-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6\" (UID: \"0bb7150b-e4a5-435a-a306-e82bd036f781\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.176146 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbpg\" (UniqueName: \"kubernetes.io/projected/0bb7150b-e4a5-435a-a306-e82bd036f781-kube-api-access-rxbpg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6\" (UID: \"0bb7150b-e4a5-435a-a306-e82bd036f781\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.176188 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bb7150b-e4a5-435a-a306-e82bd036f781-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6\" (UID: \"0bb7150b-e4a5-435a-a306-e82bd036f781\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.179348 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bb7150b-e4a5-435a-a306-e82bd036f781-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6\" (UID: \"0bb7150b-e4a5-435a-a306-e82bd036f781\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.180239 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bb7150b-e4a5-435a-a306-e82bd036f781-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6\" (UID: \"0bb7150b-e4a5-435a-a306-e82bd036f781\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.190999 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxbpg\" (UniqueName: \"kubernetes.io/projected/0bb7150b-e4a5-435a-a306-e82bd036f781-kube-api-access-rxbpg\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6\" (UID: \"0bb7150b-e4a5-435a-a306-e82bd036f781\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.275276 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bfc6a86-5a16-4814-9de9-f8cf060a966f" path="/var/lib/kubelet/pods/9bfc6a86-5a16-4814-9de9-f8cf060a966f/volumes" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.359124 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" Mar 08 00:54:43 crc kubenswrapper[4762]: I0308 00:54:43.976780 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6"] Mar 08 00:54:44 crc kubenswrapper[4762]: I0308 00:54:44.918789 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" event={"ID":"0bb7150b-e4a5-435a-a306-e82bd036f781","Type":"ContainerStarted","Data":"50868e2a127a5f0f72b44a3e3c68c266f66285e9ded8b6455269ecae1896ad9d"} Mar 08 00:54:44 crc kubenswrapper[4762]: I0308 00:54:44.919224 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" event={"ID":"0bb7150b-e4a5-435a-a306-e82bd036f781","Type":"ContainerStarted","Data":"9e7f37d16db4a31ebe07b36c75c4a23548edbf05a6eb152f2f4a0cd572612854"} Mar 08 00:54:44 crc kubenswrapper[4762]: I0308 00:54:44.958004 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" podStartSLOduration=2.461778672 podStartE2EDuration="2.95797559s" podCreationTimestamp="2026-03-08 00:54:42 +0000 UTC" firstStartedPulling="2026-03-08 00:54:43.980120659 +0000 UTC m=+1905.454265023" lastFinishedPulling="2026-03-08 00:54:44.476317567 +0000 UTC m=+1905.950461941" observedRunningTime="2026-03-08 00:54:44.943047651 +0000 UTC m=+1906.417192045" watchObservedRunningTime="2026-03-08 00:54:44.95797559 +0000 UTC m=+1906.432119974" Mar 08 00:54:45 crc kubenswrapper[4762]: I0308 00:54:45.044909 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4jv7v"] Mar 08 00:54:45 crc kubenswrapper[4762]: I0308 00:54:45.060600 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4jv7v"] Mar 08 00:54:45 crc 
kubenswrapper[4762]: I0308 00:54:45.288878 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69d92be0-6a23-4bc5-93b5-342f087356be" path="/var/lib/kubelet/pods/69d92be0-6a23-4bc5-93b5-342f087356be/volumes" Mar 08 00:54:48 crc kubenswrapper[4762]: I0308 00:54:48.265049 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:54:48 crc kubenswrapper[4762]: E0308 00:54:48.266064 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:54:49 crc kubenswrapper[4762]: I0308 00:54:49.983837 4762 generic.go:334] "Generic (PLEG): container finished" podID="0bb7150b-e4a5-435a-a306-e82bd036f781" containerID="50868e2a127a5f0f72b44a3e3c68c266f66285e9ded8b6455269ecae1896ad9d" exitCode=0 Mar 08 00:54:49 crc kubenswrapper[4762]: I0308 00:54:49.983938 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" event={"ID":"0bb7150b-e4a5-435a-a306-e82bd036f781","Type":"ContainerDied","Data":"50868e2a127a5f0f72b44a3e3c68c266f66285e9ded8b6455269ecae1896ad9d"} Mar 08 00:54:51 crc kubenswrapper[4762]: I0308 00:54:51.513197 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" Mar 08 00:54:51 crc kubenswrapper[4762]: I0308 00:54:51.611898 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bb7150b-e4a5-435a-a306-e82bd036f781-ssh-key-openstack-edpm-ipam\") pod \"0bb7150b-e4a5-435a-a306-e82bd036f781\" (UID: \"0bb7150b-e4a5-435a-a306-e82bd036f781\") " Mar 08 00:54:51 crc kubenswrapper[4762]: I0308 00:54:51.612128 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxbpg\" (UniqueName: \"kubernetes.io/projected/0bb7150b-e4a5-435a-a306-e82bd036f781-kube-api-access-rxbpg\") pod \"0bb7150b-e4a5-435a-a306-e82bd036f781\" (UID: \"0bb7150b-e4a5-435a-a306-e82bd036f781\") " Mar 08 00:54:51 crc kubenswrapper[4762]: I0308 00:54:51.612174 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bb7150b-e4a5-435a-a306-e82bd036f781-inventory\") pod \"0bb7150b-e4a5-435a-a306-e82bd036f781\" (UID: \"0bb7150b-e4a5-435a-a306-e82bd036f781\") " Mar 08 00:54:51 crc kubenswrapper[4762]: I0308 00:54:51.619965 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bb7150b-e4a5-435a-a306-e82bd036f781-kube-api-access-rxbpg" (OuterVolumeSpecName: "kube-api-access-rxbpg") pod "0bb7150b-e4a5-435a-a306-e82bd036f781" (UID: "0bb7150b-e4a5-435a-a306-e82bd036f781"). InnerVolumeSpecName "kube-api-access-rxbpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:54:51 crc kubenswrapper[4762]: I0308 00:54:51.656616 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb7150b-e4a5-435a-a306-e82bd036f781-inventory" (OuterVolumeSpecName: "inventory") pod "0bb7150b-e4a5-435a-a306-e82bd036f781" (UID: "0bb7150b-e4a5-435a-a306-e82bd036f781"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:54:51 crc kubenswrapper[4762]: I0308 00:54:51.669026 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bb7150b-e4a5-435a-a306-e82bd036f781-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0bb7150b-e4a5-435a-a306-e82bd036f781" (UID: "0bb7150b-e4a5-435a-a306-e82bd036f781"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:54:51 crc kubenswrapper[4762]: I0308 00:54:51.715650 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxbpg\" (UniqueName: \"kubernetes.io/projected/0bb7150b-e4a5-435a-a306-e82bd036f781-kube-api-access-rxbpg\") on node \"crc\" DevicePath \"\"" Mar 08 00:54:51 crc kubenswrapper[4762]: I0308 00:54:51.715696 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0bb7150b-e4a5-435a-a306-e82bd036f781-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 00:54:51 crc kubenswrapper[4762]: I0308 00:54:51.715712 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0bb7150b-e4a5-435a-a306-e82bd036f781-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.012166 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" event={"ID":"0bb7150b-e4a5-435a-a306-e82bd036f781","Type":"ContainerDied","Data":"9e7f37d16db4a31ebe07b36c75c4a23548edbf05a6eb152f2f4a0cd572612854"} Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.012314 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e7f37d16db4a31ebe07b36c75c4a23548edbf05a6eb152f2f4a0cd572612854" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 
00:54:52.012260 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.143889 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j"] Mar 08 00:54:52 crc kubenswrapper[4762]: E0308 00:54:52.144620 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bb7150b-e4a5-435a-a306-e82bd036f781" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.144654 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bb7150b-e4a5-435a-a306-e82bd036f781" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.145130 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bb7150b-e4a5-435a-a306-e82bd036f781" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.146392 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.149618 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.150016 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.150315 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.157060 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j"] Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.157263 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.228541 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjll6\" (UniqueName: \"kubernetes.io/projected/4dca332a-7c7d-448c-b866-727fb88ea870-kube-api-access-cjll6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsj4j\" (UID: \"4dca332a-7c7d-448c-b866-727fb88ea870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.228644 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4dca332a-7c7d-448c-b866-727fb88ea870-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsj4j\" (UID: \"4dca332a-7c7d-448c-b866-727fb88ea870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 
00:54:52.228781 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dca332a-7c7d-448c-b866-727fb88ea870-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsj4j\" (UID: \"4dca332a-7c7d-448c-b866-727fb88ea870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.330977 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjll6\" (UniqueName: \"kubernetes.io/projected/4dca332a-7c7d-448c-b866-727fb88ea870-kube-api-access-cjll6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsj4j\" (UID: \"4dca332a-7c7d-448c-b866-727fb88ea870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.331343 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4dca332a-7c7d-448c-b866-727fb88ea870-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsj4j\" (UID: \"4dca332a-7c7d-448c-b866-727fb88ea870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.331402 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dca332a-7c7d-448c-b866-727fb88ea870-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsj4j\" (UID: \"4dca332a-7c7d-448c-b866-727fb88ea870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.337969 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/4dca332a-7c7d-448c-b866-727fb88ea870-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsj4j\" (UID: \"4dca332a-7c7d-448c-b866-727fb88ea870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.338058 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dca332a-7c7d-448c-b866-727fb88ea870-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsj4j\" (UID: \"4dca332a-7c7d-448c-b866-727fb88ea870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.350237 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjll6\" (UniqueName: \"kubernetes.io/projected/4dca332a-7c7d-448c-b866-727fb88ea870-kube-api-access-cjll6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-hsj4j\" (UID: \"4dca332a-7c7d-448c-b866-727fb88ea870\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" Mar 08 00:54:52 crc kubenswrapper[4762]: I0308 00:54:52.495489 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" Mar 08 00:54:53 crc kubenswrapper[4762]: I0308 00:54:53.140863 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j"] Mar 08 00:54:54 crc kubenswrapper[4762]: I0308 00:54:54.036551 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" event={"ID":"4dca332a-7c7d-448c-b866-727fb88ea870","Type":"ContainerStarted","Data":"e4c8cacf8e690119b0a31da33eaf213b8270968ee8ea58f0d2e10feb8e82dea6"} Mar 08 00:54:54 crc kubenswrapper[4762]: I0308 00:54:54.036625 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" event={"ID":"4dca332a-7c7d-448c-b866-727fb88ea870","Type":"ContainerStarted","Data":"fdfeedb6e64fd011fb8ed3186b6298e61680e3ececf698b1a94fe3d140170e2f"} Mar 08 00:54:54 crc kubenswrapper[4762]: I0308 00:54:54.056184 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" podStartSLOduration=1.6068661400000002 podStartE2EDuration="2.056159698s" podCreationTimestamp="2026-03-08 00:54:52 +0000 UTC" firstStartedPulling="2026-03-08 00:54:53.153339511 +0000 UTC m=+1914.627483885" lastFinishedPulling="2026-03-08 00:54:53.602633089 +0000 UTC m=+1915.076777443" observedRunningTime="2026-03-08 00:54:54.053397364 +0000 UTC m=+1915.527541728" watchObservedRunningTime="2026-03-08 00:54:54.056159698 +0000 UTC m=+1915.530304062" Mar 08 00:55:02 crc kubenswrapper[4762]: I0308 00:55:02.263754 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:55:02 crc kubenswrapper[4762]: E0308 00:55:02.264963 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:55:14 crc kubenswrapper[4762]: I0308 00:55:14.043287 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-p8wzm"] Mar 08 00:55:14 crc kubenswrapper[4762]: I0308 00:55:14.051696 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-p8wzm"] Mar 08 00:55:15 crc kubenswrapper[4762]: I0308 00:55:15.277256 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997" path="/var/lib/kubelet/pods/a6aaf18d-2d50-42ed-8bbb-2f2c4adc5997/volumes" Mar 08 00:55:17 crc kubenswrapper[4762]: I0308 00:55:17.264548 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:55:17 crc kubenswrapper[4762]: E0308 00:55:17.265501 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:55:21 crc kubenswrapper[4762]: I0308 00:55:21.042656 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-p9m92"] Mar 08 00:55:21 crc kubenswrapper[4762]: I0308 00:55:21.068029 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hjftj"] Mar 08 00:55:21 crc kubenswrapper[4762]: I0308 00:55:21.083203 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-db-sync-w6rs6"] Mar 08 00:55:21 crc kubenswrapper[4762]: I0308 00:55:21.095643 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-p9m92"] Mar 08 00:55:21 crc kubenswrapper[4762]: I0308 00:55:21.107922 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hjftj"] Mar 08 00:55:21 crc kubenswrapper[4762]: I0308 00:55:21.118559 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-w6rs6"] Mar 08 00:55:21 crc kubenswrapper[4762]: I0308 00:55:21.287941 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f60271a-1333-4e8b-9a9d-1be9697bbfb0" path="/var/lib/kubelet/pods/1f60271a-1333-4e8b-9a9d-1be9697bbfb0/volumes" Mar 08 00:55:21 crc kubenswrapper[4762]: I0308 00:55:21.288793 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2613b509-c9d0-4a4b-99c0-11c8c9a0e891" path="/var/lib/kubelet/pods/2613b509-c9d0-4a4b-99c0-11c8c9a0e891/volumes" Mar 08 00:55:21 crc kubenswrapper[4762]: I0308 00:55:21.289593 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6898c30b-2e0c-4062-b5f2-70aa22bb5139" path="/var/lib/kubelet/pods/6898c30b-2e0c-4062-b5f2-70aa22bb5139/volumes" Mar 08 00:55:26 crc kubenswrapper[4762]: I0308 00:55:26.056217 4762 scope.go:117] "RemoveContainer" containerID="99f1d044c6a46129a53cc10de3a238248dd42ba5b8b5e6f383e41e657e6459ee" Mar 08 00:55:26 crc kubenswrapper[4762]: I0308 00:55:26.090002 4762 scope.go:117] "RemoveContainer" containerID="209e3cbf24f25a04b2128c2430344d8c2f085cc3a3b3f5b99eeb6cf7efa8e7c0" Mar 08 00:55:26 crc kubenswrapper[4762]: I0308 00:55:26.173871 4762 scope.go:117] "RemoveContainer" containerID="f7ab78916310999b3c6aef47781df670639bd1cab72100e354bd6f7c091bd184" Mar 08 00:55:26 crc kubenswrapper[4762]: I0308 00:55:26.204140 4762 scope.go:117] "RemoveContainer" containerID="2373b2ef49cbd140ca57248d520cca16a0957a5e4c15ca55d5ae8afc76069539" Mar 08 
00:55:26 crc kubenswrapper[4762]: I0308 00:55:26.260284 4762 scope.go:117] "RemoveContainer" containerID="131af0bb5a04281b09939b68e91fbaedcdc8412955b11584f37b21d49d4b280b" Mar 08 00:55:26 crc kubenswrapper[4762]: I0308 00:55:26.312719 4762 scope.go:117] "RemoveContainer" containerID="921de3fc2026836fc040da75a115f70f29584eeb9e193516fd272a984941f492" Mar 08 00:55:26 crc kubenswrapper[4762]: I0308 00:55:26.350374 4762 scope.go:117] "RemoveContainer" containerID="cd1aaa171555907cb6e1ce1f31f58f8f7ee8e73b81b246ca49a3a38eafb237de" Mar 08 00:55:26 crc kubenswrapper[4762]: I0308 00:55:26.371099 4762 scope.go:117] "RemoveContainer" containerID="17036cd87a89710f741f57d9c87eb414a74f9dcd46354fcc280150141a4acc8a" Mar 08 00:55:26 crc kubenswrapper[4762]: I0308 00:55:26.403967 4762 scope.go:117] "RemoveContainer" containerID="24f10e01b76751b32e70cbf746c17f0d5d1367933b11187b517e9aae78b67857" Mar 08 00:55:26 crc kubenswrapper[4762]: I0308 00:55:26.439367 4762 scope.go:117] "RemoveContainer" containerID="933f6a5aee8c739dc9e13c926930605089d71d02b783a6cf738a30148c7cc5c4" Mar 08 00:55:26 crc kubenswrapper[4762]: I0308 00:55:26.471591 4762 scope.go:117] "RemoveContainer" containerID="496ecf81e15894d396c24c98cb6dfb74c150d9b266f9b63618cddcd03fb9aecb" Mar 08 00:55:26 crc kubenswrapper[4762]: I0308 00:55:26.509032 4762 scope.go:117] "RemoveContainer" containerID="0084fc87485ef56b35c25c5e219bf69a2f76206119c26bdde22b98d5c688fb9d" Mar 08 00:55:26 crc kubenswrapper[4762]: I0308 00:55:26.545858 4762 scope.go:117] "RemoveContainer" containerID="ba69203ae73ee6dc75bd92fac80922f446ecb8c2b18a378f6e08a85950a61b04" Mar 08 00:55:26 crc kubenswrapper[4762]: I0308 00:55:26.578811 4762 scope.go:117] "RemoveContainer" containerID="0e53e667046654eb7478c11e56e61886be357d817704fbeff86e04204e225e55" Mar 08 00:55:26 crc kubenswrapper[4762]: I0308 00:55:26.644487 4762 scope.go:117] "RemoveContainer" containerID="ea0db89a0f0bc007473ef74c878a40bf8348bdcc434b2761a93d24ea0be015f6" Mar 08 00:55:28 crc 
kubenswrapper[4762]: I0308 00:55:28.264121 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:55:28 crc kubenswrapper[4762]: E0308 00:55:28.264614 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:55:32 crc kubenswrapper[4762]: I0308 00:55:32.580808 4762 generic.go:334] "Generic (PLEG): container finished" podID="4dca332a-7c7d-448c-b866-727fb88ea870" containerID="e4c8cacf8e690119b0a31da33eaf213b8270968ee8ea58f0d2e10feb8e82dea6" exitCode=0 Mar 08 00:55:32 crc kubenswrapper[4762]: I0308 00:55:32.580954 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" event={"ID":"4dca332a-7c7d-448c-b866-727fb88ea870","Type":"ContainerDied","Data":"e4c8cacf8e690119b0a31da33eaf213b8270968ee8ea58f0d2e10feb8e82dea6"} Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.153513 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.216656 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4dca332a-7c7d-448c-b866-727fb88ea870-ssh-key-openstack-edpm-ipam\") pod \"4dca332a-7c7d-448c-b866-727fb88ea870\" (UID: \"4dca332a-7c7d-448c-b866-727fb88ea870\") " Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.217366 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjll6\" (UniqueName: \"kubernetes.io/projected/4dca332a-7c7d-448c-b866-727fb88ea870-kube-api-access-cjll6\") pod \"4dca332a-7c7d-448c-b866-727fb88ea870\" (UID: \"4dca332a-7c7d-448c-b866-727fb88ea870\") " Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.217429 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dca332a-7c7d-448c-b866-727fb88ea870-inventory\") pod \"4dca332a-7c7d-448c-b866-727fb88ea870\" (UID: \"4dca332a-7c7d-448c-b866-727fb88ea870\") " Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.223607 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dca332a-7c7d-448c-b866-727fb88ea870-kube-api-access-cjll6" (OuterVolumeSpecName: "kube-api-access-cjll6") pod "4dca332a-7c7d-448c-b866-727fb88ea870" (UID: "4dca332a-7c7d-448c-b866-727fb88ea870"). InnerVolumeSpecName "kube-api-access-cjll6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.251129 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dca332a-7c7d-448c-b866-727fb88ea870-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4dca332a-7c7d-448c-b866-727fb88ea870" (UID: "4dca332a-7c7d-448c-b866-727fb88ea870"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.262267 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4dca332a-7c7d-448c-b866-727fb88ea870-inventory" (OuterVolumeSpecName: "inventory") pod "4dca332a-7c7d-448c-b866-727fb88ea870" (UID: "4dca332a-7c7d-448c-b866-727fb88ea870"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.320734 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4dca332a-7c7d-448c-b866-727fb88ea870-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.320870 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjll6\" (UniqueName: \"kubernetes.io/projected/4dca332a-7c7d-448c-b866-727fb88ea870-kube-api-access-cjll6\") on node \"crc\" DevicePath \"\"" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.320887 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4dca332a-7c7d-448c-b866-727fb88ea870-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.615754 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" 
event={"ID":"4dca332a-7c7d-448c-b866-727fb88ea870","Type":"ContainerDied","Data":"fdfeedb6e64fd011fb8ed3186b6298e61680e3ececf698b1a94fe3d140170e2f"} Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.615816 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdfeedb6e64fd011fb8ed3186b6298e61680e3ececf698b1a94fe3d140170e2f" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.615855 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.711665 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn"] Mar 08 00:55:34 crc kubenswrapper[4762]: E0308 00:55:34.712161 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dca332a-7c7d-448c-b866-727fb88ea870" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.712183 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dca332a-7c7d-448c-b866-727fb88ea870" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.712438 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dca332a-7c7d-448c-b866-727fb88ea870" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.713343 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.715360 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.716029 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.716528 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.720281 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.727529 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn"] Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.840410 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gg8c\" (UniqueName: \"kubernetes.io/projected/f7494b8e-e16c-482d-8fc3-59736f59c318-kube-api-access-9gg8c\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn\" (UID: \"f7494b8e-e16c-482d-8fc3-59736f59c318\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.840748 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7494b8e-e16c-482d-8fc3-59736f59c318-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn\" (UID: \"f7494b8e-e16c-482d-8fc3-59736f59c318\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.840961 4762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7494b8e-e16c-482d-8fc3-59736f59c318-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn\" (UID: \"f7494b8e-e16c-482d-8fc3-59736f59c318\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.943834 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gg8c\" (UniqueName: \"kubernetes.io/projected/f7494b8e-e16c-482d-8fc3-59736f59c318-kube-api-access-9gg8c\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn\" (UID: \"f7494b8e-e16c-482d-8fc3-59736f59c318\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.944038 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7494b8e-e16c-482d-8fc3-59736f59c318-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn\" (UID: \"f7494b8e-e16c-482d-8fc3-59736f59c318\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.944145 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7494b8e-e16c-482d-8fc3-59736f59c318-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn\" (UID: \"f7494b8e-e16c-482d-8fc3-59736f59c318\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.948818 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7494b8e-e16c-482d-8fc3-59736f59c318-inventory\") 
pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn\" (UID: \"f7494b8e-e16c-482d-8fc3-59736f59c318\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.950637 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7494b8e-e16c-482d-8fc3-59736f59c318-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn\" (UID: \"f7494b8e-e16c-482d-8fc3-59736f59c318\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" Mar 08 00:55:34 crc kubenswrapper[4762]: I0308 00:55:34.959345 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gg8c\" (UniqueName: \"kubernetes.io/projected/f7494b8e-e16c-482d-8fc3-59736f59c318-kube-api-access-9gg8c\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn\" (UID: \"f7494b8e-e16c-482d-8fc3-59736f59c318\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" Mar 08 00:55:35 crc kubenswrapper[4762]: I0308 00:55:35.043297 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" Mar 08 00:55:35 crc kubenswrapper[4762]: I0308 00:55:35.640981 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn"] Mar 08 00:55:36 crc kubenswrapper[4762]: I0308 00:55:36.644251 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" event={"ID":"f7494b8e-e16c-482d-8fc3-59736f59c318","Type":"ContainerStarted","Data":"f0543878de1eca384d6d1d0937d3dd5df14764b02bdf3f2442e53684164312c3"} Mar 08 00:55:36 crc kubenswrapper[4762]: I0308 00:55:36.644662 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" event={"ID":"f7494b8e-e16c-482d-8fc3-59736f59c318","Type":"ContainerStarted","Data":"9105d3aa759078429df3f528b7015f614d8faaac290b791f270a5e6a976dd909"} Mar 08 00:55:36 crc kubenswrapper[4762]: I0308 00:55:36.668873 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" podStartSLOduration=2.206502862 podStartE2EDuration="2.668848792s" podCreationTimestamp="2026-03-08 00:55:34 +0000 UTC" firstStartedPulling="2026-03-08 00:55:35.633355372 +0000 UTC m=+1957.107499746" lastFinishedPulling="2026-03-08 00:55:36.095701292 +0000 UTC m=+1957.569845676" observedRunningTime="2026-03-08 00:55:36.663560399 +0000 UTC m=+1958.137704753" watchObservedRunningTime="2026-03-08 00:55:36.668848792 +0000 UTC m=+1958.142993146" Mar 08 00:55:39 crc kubenswrapper[4762]: I0308 00:55:39.273262 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:55:39 crc kubenswrapper[4762]: E0308 00:55:39.273644 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:55:40 crc kubenswrapper[4762]: I0308 00:55:40.727056 4762 generic.go:334] "Generic (PLEG): container finished" podID="f7494b8e-e16c-482d-8fc3-59736f59c318" containerID="f0543878de1eca384d6d1d0937d3dd5df14764b02bdf3f2442e53684164312c3" exitCode=0 Mar 08 00:55:40 crc kubenswrapper[4762]: I0308 00:55:40.727143 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" event={"ID":"f7494b8e-e16c-482d-8fc3-59736f59c318","Type":"ContainerDied","Data":"f0543878de1eca384d6d1d0937d3dd5df14764b02bdf3f2442e53684164312c3"} Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.310140 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.425274 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7494b8e-e16c-482d-8fc3-59736f59c318-inventory\") pod \"f7494b8e-e16c-482d-8fc3-59736f59c318\" (UID: \"f7494b8e-e16c-482d-8fc3-59736f59c318\") " Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.425336 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gg8c\" (UniqueName: \"kubernetes.io/projected/f7494b8e-e16c-482d-8fc3-59736f59c318-kube-api-access-9gg8c\") pod \"f7494b8e-e16c-482d-8fc3-59736f59c318\" (UID: \"f7494b8e-e16c-482d-8fc3-59736f59c318\") " Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.425419 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7494b8e-e16c-482d-8fc3-59736f59c318-ssh-key-openstack-edpm-ipam\") pod \"f7494b8e-e16c-482d-8fc3-59736f59c318\" (UID: \"f7494b8e-e16c-482d-8fc3-59736f59c318\") " Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.435173 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7494b8e-e16c-482d-8fc3-59736f59c318-kube-api-access-9gg8c" (OuterVolumeSpecName: "kube-api-access-9gg8c") pod "f7494b8e-e16c-482d-8fc3-59736f59c318" (UID: "f7494b8e-e16c-482d-8fc3-59736f59c318"). InnerVolumeSpecName "kube-api-access-9gg8c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.463869 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7494b8e-e16c-482d-8fc3-59736f59c318-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f7494b8e-e16c-482d-8fc3-59736f59c318" (UID: "f7494b8e-e16c-482d-8fc3-59736f59c318"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.488061 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7494b8e-e16c-482d-8fc3-59736f59c318-inventory" (OuterVolumeSpecName: "inventory") pod "f7494b8e-e16c-482d-8fc3-59736f59c318" (UID: "f7494b8e-e16c-482d-8fc3-59736f59c318"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.528644 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f7494b8e-e16c-482d-8fc3-59736f59c318-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.528703 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gg8c\" (UniqueName: \"kubernetes.io/projected/f7494b8e-e16c-482d-8fc3-59736f59c318-kube-api-access-9gg8c\") on node \"crc\" DevicePath \"\"" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.528725 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f7494b8e-e16c-482d-8fc3-59736f59c318-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.769471 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" 
event={"ID":"f7494b8e-e16c-482d-8fc3-59736f59c318","Type":"ContainerDied","Data":"9105d3aa759078429df3f528b7015f614d8faaac290b791f270a5e6a976dd909"} Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.769529 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9105d3aa759078429df3f528b7015f614d8faaac290b791f270a5e6a976dd909" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.769615 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.842378 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn"] Mar 08 00:55:42 crc kubenswrapper[4762]: E0308 00:55:42.843100 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7494b8e-e16c-482d-8fc3-59736f59c318" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.843134 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7494b8e-e16c-482d-8fc3-59736f59c318" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.843543 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7494b8e-e16c-482d-8fc3-59736f59c318" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.845005 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.849075 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.849248 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.849498 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.849612 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 00:55:42 crc kubenswrapper[4762]: I0308 00:55:42.854770 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn"] Mar 08 00:55:43 crc kubenswrapper[4762]: I0308 00:55:43.039628 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba04a642-51e5-447c-b31c-fa0b5de485f0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn\" (UID: \"ba04a642-51e5-447c-b31c-fa0b5de485f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" Mar 08 00:55:43 crc kubenswrapper[4762]: I0308 00:55:43.040258 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba04a642-51e5-447c-b31c-fa0b5de485f0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn\" (UID: \"ba04a642-51e5-447c-b31c-fa0b5de485f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" Mar 08 00:55:43 crc kubenswrapper[4762]: I0308 00:55:43.040570 
4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6n88\" (UniqueName: \"kubernetes.io/projected/ba04a642-51e5-447c-b31c-fa0b5de485f0-kube-api-access-p6n88\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn\" (UID: \"ba04a642-51e5-447c-b31c-fa0b5de485f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" Mar 08 00:55:43 crc kubenswrapper[4762]: I0308 00:55:43.142261 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6n88\" (UniqueName: \"kubernetes.io/projected/ba04a642-51e5-447c-b31c-fa0b5de485f0-kube-api-access-p6n88\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn\" (UID: \"ba04a642-51e5-447c-b31c-fa0b5de485f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" Mar 08 00:55:43 crc kubenswrapper[4762]: I0308 00:55:43.142325 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba04a642-51e5-447c-b31c-fa0b5de485f0-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn\" (UID: \"ba04a642-51e5-447c-b31c-fa0b5de485f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" Mar 08 00:55:43 crc kubenswrapper[4762]: I0308 00:55:43.142430 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba04a642-51e5-447c-b31c-fa0b5de485f0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn\" (UID: \"ba04a642-51e5-447c-b31c-fa0b5de485f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" Mar 08 00:55:43 crc kubenswrapper[4762]: I0308 00:55:43.147454 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba04a642-51e5-447c-b31c-fa0b5de485f0-inventory\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn\" (UID: \"ba04a642-51e5-447c-b31c-fa0b5de485f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" Mar 08 00:55:43 crc kubenswrapper[4762]: I0308 00:55:43.147814 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba04a642-51e5-447c-b31c-fa0b5de485f0-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn\" (UID: \"ba04a642-51e5-447c-b31c-fa0b5de485f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" Mar 08 00:55:43 crc kubenswrapper[4762]: I0308 00:55:43.167672 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6n88\" (UniqueName: \"kubernetes.io/projected/ba04a642-51e5-447c-b31c-fa0b5de485f0-kube-api-access-p6n88\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn\" (UID: \"ba04a642-51e5-447c-b31c-fa0b5de485f0\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" Mar 08 00:55:43 crc kubenswrapper[4762]: I0308 00:55:43.170855 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" Mar 08 00:55:43 crc kubenswrapper[4762]: I0308 00:55:43.797905 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn"] Mar 08 00:55:43 crc kubenswrapper[4762]: W0308 00:55:43.802662 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba04a642_51e5_447c_b31c_fa0b5de485f0.slice/crio-6d5098e4637e1239ed39b6a940c812b44f9ae6b544092a698f8fa970aa29ab1a WatchSource:0}: Error finding container 6d5098e4637e1239ed39b6a940c812b44f9ae6b544092a698f8fa970aa29ab1a: Status 404 returned error can't find the container with id 6d5098e4637e1239ed39b6a940c812b44f9ae6b544092a698f8fa970aa29ab1a Mar 08 00:55:44 crc kubenswrapper[4762]: I0308 00:55:44.049137 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-pxw9p"] Mar 08 00:55:44 crc kubenswrapper[4762]: I0308 00:55:44.057780 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-pxw9p"] Mar 08 00:55:44 crc kubenswrapper[4762]: I0308 00:55:44.795992 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" event={"ID":"ba04a642-51e5-447c-b31c-fa0b5de485f0","Type":"ContainerStarted","Data":"5b0d377b56b84720b4914ca6ccfc7d4deb851450328e08083f2a31139a7fe615"} Mar 08 00:55:44 crc kubenswrapper[4762]: I0308 00:55:44.796613 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" event={"ID":"ba04a642-51e5-447c-b31c-fa0b5de485f0","Type":"ContainerStarted","Data":"6d5098e4637e1239ed39b6a940c812b44f9ae6b544092a698f8fa970aa29ab1a"} Mar 08 00:55:44 crc kubenswrapper[4762]: I0308 00:55:44.820266 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" podStartSLOduration=2.422517865 podStartE2EDuration="2.820241352s" podCreationTimestamp="2026-03-08 00:55:42 +0000 UTC" firstStartedPulling="2026-03-08 00:55:43.805383326 +0000 UTC m=+1965.279527680" lastFinishedPulling="2026-03-08 00:55:44.203106813 +0000 UTC m=+1965.677251167" observedRunningTime="2026-03-08 00:55:44.814705552 +0000 UTC m=+1966.288849956" watchObservedRunningTime="2026-03-08 00:55:44.820241352 +0000 UTC m=+1966.294385696" Mar 08 00:55:45 crc kubenswrapper[4762]: I0308 00:55:45.295396 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8511806b-d3fb-48df-8348-33f84645e2a3" path="/var/lib/kubelet/pods/8511806b-d3fb-48df-8348-33f84645e2a3/volumes" Mar 08 00:55:51 crc kubenswrapper[4762]: I0308 00:55:51.263653 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:55:51 crc kubenswrapper[4762]: E0308 00:55:51.264444 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:56:00 crc kubenswrapper[4762]: I0308 00:56:00.167092 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548856-zpmvc"] Mar 08 00:56:00 crc kubenswrapper[4762]: I0308 00:56:00.169520 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548856-zpmvc" Mar 08 00:56:00 crc kubenswrapper[4762]: I0308 00:56:00.173411 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 00:56:00 crc kubenswrapper[4762]: I0308 00:56:00.176599 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 00:56:00 crc kubenswrapper[4762]: I0308 00:56:00.179546 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 00:56:00 crc kubenswrapper[4762]: I0308 00:56:00.182303 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548856-zpmvc"] Mar 08 00:56:00 crc kubenswrapper[4762]: I0308 00:56:00.182553 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwtx4\" (UniqueName: \"kubernetes.io/projected/74c4248d-35d8-4ea7-9546-7665ea1c9f15-kube-api-access-vwtx4\") pod \"auto-csr-approver-29548856-zpmvc\" (UID: \"74c4248d-35d8-4ea7-9546-7665ea1c9f15\") " pod="openshift-infra/auto-csr-approver-29548856-zpmvc" Mar 08 00:56:00 crc kubenswrapper[4762]: I0308 00:56:00.284025 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwtx4\" (UniqueName: \"kubernetes.io/projected/74c4248d-35d8-4ea7-9546-7665ea1c9f15-kube-api-access-vwtx4\") pod \"auto-csr-approver-29548856-zpmvc\" (UID: \"74c4248d-35d8-4ea7-9546-7665ea1c9f15\") " pod="openshift-infra/auto-csr-approver-29548856-zpmvc" Mar 08 00:56:00 crc kubenswrapper[4762]: I0308 00:56:00.312035 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwtx4\" (UniqueName: \"kubernetes.io/projected/74c4248d-35d8-4ea7-9546-7665ea1c9f15-kube-api-access-vwtx4\") pod \"auto-csr-approver-29548856-zpmvc\" (UID: \"74c4248d-35d8-4ea7-9546-7665ea1c9f15\") " 
pod="openshift-infra/auto-csr-approver-29548856-zpmvc" Mar 08 00:56:00 crc kubenswrapper[4762]: I0308 00:56:00.493843 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548856-zpmvc" Mar 08 00:56:01 crc kubenswrapper[4762]: I0308 00:56:01.008713 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548856-zpmvc"] Mar 08 00:56:01 crc kubenswrapper[4762]: W0308 00:56:01.012332 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74c4248d_35d8_4ea7_9546_7665ea1c9f15.slice/crio-cf23344f56c171a6dddde63a3fc9ca174c307cb863a9c333269f88834582f753 WatchSource:0}: Error finding container cf23344f56c171a6dddde63a3fc9ca174c307cb863a9c333269f88834582f753: Status 404 returned error can't find the container with id cf23344f56c171a6dddde63a3fc9ca174c307cb863a9c333269f88834582f753 Mar 08 00:56:01 crc kubenswrapper[4762]: I0308 00:56:01.054652 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548856-zpmvc" event={"ID":"74c4248d-35d8-4ea7-9546-7665ea1c9f15","Type":"ContainerStarted","Data":"cf23344f56c171a6dddde63a3fc9ca174c307cb863a9c333269f88834582f753"} Mar 08 00:56:02 crc kubenswrapper[4762]: I0308 00:56:02.263534 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:56:02 crc kubenswrapper[4762]: E0308 00:56:02.264593 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:56:03 crc 
kubenswrapper[4762]: I0308 00:56:03.079501 4762 generic.go:334] "Generic (PLEG): container finished" podID="74c4248d-35d8-4ea7-9546-7665ea1c9f15" containerID="38562e5240f1d9cbadc63e799b4ae7ca7f43909095489832b193c2143799b657" exitCode=0 Mar 08 00:56:03 crc kubenswrapper[4762]: I0308 00:56:03.079620 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548856-zpmvc" event={"ID":"74c4248d-35d8-4ea7-9546-7665ea1c9f15","Type":"ContainerDied","Data":"38562e5240f1d9cbadc63e799b4ae7ca7f43909095489832b193c2143799b657"} Mar 08 00:56:04 crc kubenswrapper[4762]: I0308 00:56:04.483447 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548856-zpmvc" Mar 08 00:56:04 crc kubenswrapper[4762]: I0308 00:56:04.585927 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwtx4\" (UniqueName: \"kubernetes.io/projected/74c4248d-35d8-4ea7-9546-7665ea1c9f15-kube-api-access-vwtx4\") pod \"74c4248d-35d8-4ea7-9546-7665ea1c9f15\" (UID: \"74c4248d-35d8-4ea7-9546-7665ea1c9f15\") " Mar 08 00:56:04 crc kubenswrapper[4762]: I0308 00:56:04.596053 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74c4248d-35d8-4ea7-9546-7665ea1c9f15-kube-api-access-vwtx4" (OuterVolumeSpecName: "kube-api-access-vwtx4") pod "74c4248d-35d8-4ea7-9546-7665ea1c9f15" (UID: "74c4248d-35d8-4ea7-9546-7665ea1c9f15"). InnerVolumeSpecName "kube-api-access-vwtx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:56:04 crc kubenswrapper[4762]: I0308 00:56:04.688003 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwtx4\" (UniqueName: \"kubernetes.io/projected/74c4248d-35d8-4ea7-9546-7665ea1c9f15-kube-api-access-vwtx4\") on node \"crc\" DevicePath \"\"" Mar 08 00:56:05 crc kubenswrapper[4762]: I0308 00:56:05.105182 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548856-zpmvc" event={"ID":"74c4248d-35d8-4ea7-9546-7665ea1c9f15","Type":"ContainerDied","Data":"cf23344f56c171a6dddde63a3fc9ca174c307cb863a9c333269f88834582f753"} Mar 08 00:56:05 crc kubenswrapper[4762]: I0308 00:56:05.105240 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf23344f56c171a6dddde63a3fc9ca174c307cb863a9c333269f88834582f753" Mar 08 00:56:05 crc kubenswrapper[4762]: I0308 00:56:05.105250 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548856-zpmvc" Mar 08 00:56:05 crc kubenswrapper[4762]: I0308 00:56:05.607404 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548850-fbqsx"] Mar 08 00:56:05 crc kubenswrapper[4762]: I0308 00:56:05.625036 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548850-fbqsx"] Mar 08 00:56:07 crc kubenswrapper[4762]: I0308 00:56:07.277374 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec41a8c-126a-4fb2-972f-bad18afb2398" path="/var/lib/kubelet/pods/1ec41a8c-126a-4fb2-972f-bad18afb2398/volumes" Mar 08 00:56:15 crc kubenswrapper[4762]: I0308 00:56:15.264002 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:56:15 crc kubenswrapper[4762]: E0308 00:56:15.264609 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:56:20 crc kubenswrapper[4762]: I0308 00:56:20.044480 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-pd5j4"] Mar 08 00:56:20 crc kubenswrapper[4762]: I0308 00:56:20.063334 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-pd5j4"] Mar 08 00:56:21 crc kubenswrapper[4762]: I0308 00:56:21.279871 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9551f6c-a71d-44b6-adb7-fe69e9c4f259" path="/var/lib/kubelet/pods/a9551f6c-a71d-44b6-adb7-fe69e9c4f259/volumes" Mar 08 00:56:22 crc kubenswrapper[4762]: I0308 00:56:22.046262 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-smtf6"] Mar 08 00:56:22 crc kubenswrapper[4762]: I0308 00:56:22.083734 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-nzmhb"] Mar 08 00:56:22 crc kubenswrapper[4762]: I0308 00:56:22.102241 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-24e5-account-create-update-25drx"] Mar 08 00:56:22 crc kubenswrapper[4762]: I0308 00:56:22.110829 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-smtf6"] Mar 08 00:56:22 crc kubenswrapper[4762]: I0308 00:56:22.118669 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-nzmhb"] Mar 08 00:56:22 crc kubenswrapper[4762]: I0308 00:56:22.126264 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-24e5-account-create-update-25drx"] Mar 08 00:56:23 crc kubenswrapper[4762]: I0308 00:56:23.040679 
4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-98ce-account-create-update-bz5cs"] Mar 08 00:56:23 crc kubenswrapper[4762]: I0308 00:56:23.049935 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8f67-account-create-update-k6g6t"] Mar 08 00:56:23 crc kubenswrapper[4762]: I0308 00:56:23.058020 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-98ce-account-create-update-bz5cs"] Mar 08 00:56:23 crc kubenswrapper[4762]: I0308 00:56:23.067284 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8f67-account-create-update-k6g6t"] Mar 08 00:56:23 crc kubenswrapper[4762]: I0308 00:56:23.285776 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a25763c-610d-42ef-ad0e-79c540318681" path="/var/lib/kubelet/pods/4a25763c-610d-42ef-ad0e-79c540318681/volumes" Mar 08 00:56:23 crc kubenswrapper[4762]: I0308 00:56:23.286546 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7fac32d-8f26-459b-a67e-592f1e292d80" path="/var/lib/kubelet/pods/a7fac32d-8f26-459b-a67e-592f1e292d80/volumes" Mar 08 00:56:23 crc kubenswrapper[4762]: I0308 00:56:23.287347 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b48c29ff-2a5a-4583-86a2-5550b8653bed" path="/var/lib/kubelet/pods/b48c29ff-2a5a-4583-86a2-5550b8653bed/volumes" Mar 08 00:56:23 crc kubenswrapper[4762]: I0308 00:56:23.288066 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcd021de-3952-4553-ae54-5c244346412a" path="/var/lib/kubelet/pods/bcd021de-3952-4553-ae54-5c244346412a/volumes" Mar 08 00:56:23 crc kubenswrapper[4762]: I0308 00:56:23.289571 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f" path="/var/lib/kubelet/pods/e47b48cf-f7fc-4dd6-9c2f-cc436c5a949f/volumes" Mar 08 00:56:27 crc kubenswrapper[4762]: I0308 00:56:27.033306 4762 scope.go:117] "RemoveContainer" 
containerID="349d4e422e67335ccfdd9dcd824d014d8e2f4239f458dda690b7c9c13aedef44" Mar 08 00:56:27 crc kubenswrapper[4762]: I0308 00:56:27.068837 4762 scope.go:117] "RemoveContainer" containerID="6a57f6a58b9b035001322a51bdc185f7977090611bec71a327e59149b9bef547" Mar 08 00:56:27 crc kubenswrapper[4762]: I0308 00:56:27.142571 4762 scope.go:117] "RemoveContainer" containerID="000cb75f1ee491c6bd5d8e2cc10ed1b3e49115abb5148d46966fd22ac7f292d0" Mar 08 00:56:27 crc kubenswrapper[4762]: I0308 00:56:27.203726 4762 scope.go:117] "RemoveContainer" containerID="47bc565248bff0e09c955e1e7cb147379b58f8dc41cfb007b4daba23743668ba" Mar 08 00:56:27 crc kubenswrapper[4762]: I0308 00:56:27.263458 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:56:27 crc kubenswrapper[4762]: E0308 00:56:27.264148 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:56:27 crc kubenswrapper[4762]: I0308 00:56:27.296310 4762 scope.go:117] "RemoveContainer" containerID="5ecd17bbd9698099f53ea0255abee29bd4801c10345f01325c97e7143c4c19ea" Mar 08 00:56:27 crc kubenswrapper[4762]: I0308 00:56:27.337420 4762 scope.go:117] "RemoveContainer" containerID="76c8a4319c56c8e9d3fd72fde6141133739516507d6c80a407b949f3d39a6d3b" Mar 08 00:56:27 crc kubenswrapper[4762]: I0308 00:56:27.402030 4762 scope.go:117] "RemoveContainer" containerID="70cef9d6de50acbe20e8f771ae4b325213975dc24fb8027a301244fbf9ed2f37" Mar 08 00:56:27 crc kubenswrapper[4762]: I0308 00:56:27.430606 4762 scope.go:117] "RemoveContainer" 
containerID="ebe30539e9d64aa6b38e21717cada48cf84ff4cbd77750f5e048bfde986c5252" Mar 08 00:56:35 crc kubenswrapper[4762]: I0308 00:56:35.519816 4762 generic.go:334] "Generic (PLEG): container finished" podID="ba04a642-51e5-447c-b31c-fa0b5de485f0" containerID="5b0d377b56b84720b4914ca6ccfc7d4deb851450328e08083f2a31139a7fe615" exitCode=0 Mar 08 00:56:35 crc kubenswrapper[4762]: I0308 00:56:35.519944 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" event={"ID":"ba04a642-51e5-447c-b31c-fa0b5de485f0","Type":"ContainerDied","Data":"5b0d377b56b84720b4914ca6ccfc7d4deb851450328e08083f2a31139a7fe615"} Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.086559 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.191062 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba04a642-51e5-447c-b31c-fa0b5de485f0-ssh-key-openstack-edpm-ipam\") pod \"ba04a642-51e5-447c-b31c-fa0b5de485f0\" (UID: \"ba04a642-51e5-447c-b31c-fa0b5de485f0\") " Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.191460 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6n88\" (UniqueName: \"kubernetes.io/projected/ba04a642-51e5-447c-b31c-fa0b5de485f0-kube-api-access-p6n88\") pod \"ba04a642-51e5-447c-b31c-fa0b5de485f0\" (UID: \"ba04a642-51e5-447c-b31c-fa0b5de485f0\") " Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.191559 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba04a642-51e5-447c-b31c-fa0b5de485f0-inventory\") pod \"ba04a642-51e5-447c-b31c-fa0b5de485f0\" (UID: \"ba04a642-51e5-447c-b31c-fa0b5de485f0\") " Mar 08 00:56:37 crc 
kubenswrapper[4762]: I0308 00:56:37.198735 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba04a642-51e5-447c-b31c-fa0b5de485f0-kube-api-access-p6n88" (OuterVolumeSpecName: "kube-api-access-p6n88") pod "ba04a642-51e5-447c-b31c-fa0b5de485f0" (UID: "ba04a642-51e5-447c-b31c-fa0b5de485f0"). InnerVolumeSpecName "kube-api-access-p6n88". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.230858 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba04a642-51e5-447c-b31c-fa0b5de485f0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ba04a642-51e5-447c-b31c-fa0b5de485f0" (UID: "ba04a642-51e5-447c-b31c-fa0b5de485f0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.233075 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba04a642-51e5-447c-b31c-fa0b5de485f0-inventory" (OuterVolumeSpecName: "inventory") pod "ba04a642-51e5-447c-b31c-fa0b5de485f0" (UID: "ba04a642-51e5-447c-b31c-fa0b5de485f0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.294700 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6n88\" (UniqueName: \"kubernetes.io/projected/ba04a642-51e5-447c-b31c-fa0b5de485f0-kube-api-access-p6n88\") on node \"crc\" DevicePath \"\"" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.294741 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ba04a642-51e5-447c-b31c-fa0b5de485f0-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.294777 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ba04a642-51e5-447c-b31c-fa0b5de485f0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.545428 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" event={"ID":"ba04a642-51e5-447c-b31c-fa0b5de485f0","Type":"ContainerDied","Data":"6d5098e4637e1239ed39b6a940c812b44f9ae6b544092a698f8fa970aa29ab1a"} Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.545482 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d5098e4637e1239ed39b6a940c812b44f9ae6b544092a698f8fa970aa29ab1a" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.545564 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.756481 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6rgx8"] Mar 08 00:56:37 crc kubenswrapper[4762]: E0308 00:56:37.756891 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba04a642-51e5-447c-b31c-fa0b5de485f0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.756905 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba04a642-51e5-447c-b31c-fa0b5de485f0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 08 00:56:37 crc kubenswrapper[4762]: E0308 00:56:37.756951 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c4248d-35d8-4ea7-9546-7665ea1c9f15" containerName="oc" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.756956 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c4248d-35d8-4ea7-9546-7665ea1c9f15" containerName="oc" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.757134 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="74c4248d-35d8-4ea7-9546-7665ea1c9f15" containerName="oc" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.757152 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba04a642-51e5-447c-b31c-fa0b5de485f0" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.757780 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.759636 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.762377 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.762435 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.762527 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.775518 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6rgx8"] Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.812208 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6rgx8\" (UID: \"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c\") " pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.812443 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmjrl\" (UniqueName: \"kubernetes.io/projected/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-kube-api-access-lmjrl\") pod \"ssh-known-hosts-edpm-deployment-6rgx8\" (UID: \"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c\") " pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.812580 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6rgx8\" (UID: \"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c\") " pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.915522 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmjrl\" (UniqueName: \"kubernetes.io/projected/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-kube-api-access-lmjrl\") pod \"ssh-known-hosts-edpm-deployment-6rgx8\" (UID: \"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c\") " pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.915675 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6rgx8\" (UID: \"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c\") " pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.915790 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6rgx8\" (UID: \"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c\") " pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.920276 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-6rgx8\" (UID: \"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c\") " pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.921064 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-6rgx8\" (UID: \"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c\") " pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" Mar 08 00:56:37 crc kubenswrapper[4762]: I0308 00:56:37.943077 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmjrl\" (UniqueName: \"kubernetes.io/projected/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-kube-api-access-lmjrl\") pod \"ssh-known-hosts-edpm-deployment-6rgx8\" (UID: \"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c\") " pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" Mar 08 00:56:38 crc kubenswrapper[4762]: I0308 00:56:38.085621 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" Mar 08 00:56:38 crc kubenswrapper[4762]: W0308 00:56:38.709894 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd45e1aae_1e35_43f5_95bb_b9bc4750eb9c.slice/crio-31a34e84db0c40de3db91329da0da9975a0a1aba32369bbd8b7caab2c8ce16fe WatchSource:0}: Error finding container 31a34e84db0c40de3db91329da0da9975a0a1aba32369bbd8b7caab2c8ce16fe: Status 404 returned error can't find the container with id 31a34e84db0c40de3db91329da0da9975a0a1aba32369bbd8b7caab2c8ce16fe Mar 08 00:56:38 crc kubenswrapper[4762]: I0308 00:56:38.714306 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6rgx8"] Mar 08 00:56:39 crc kubenswrapper[4762]: I0308 00:56:39.572176 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" event={"ID":"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c","Type":"ContainerStarted","Data":"7f579d220af391252c54aa64ff17a3692d7c5a59e5186291213fe81aa0af5c1c"} 
Mar 08 00:56:39 crc kubenswrapper[4762]: I0308 00:56:39.572724 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" event={"ID":"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c","Type":"ContainerStarted","Data":"31a34e84db0c40de3db91329da0da9975a0a1aba32369bbd8b7caab2c8ce16fe"} Mar 08 00:56:39 crc kubenswrapper[4762]: I0308 00:56:39.593684 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" podStartSLOduration=2.082060691 podStartE2EDuration="2.593665063s" podCreationTimestamp="2026-03-08 00:56:37 +0000 UTC" firstStartedPulling="2026-03-08 00:56:38.713082217 +0000 UTC m=+2020.187226571" lastFinishedPulling="2026-03-08 00:56:39.224686609 +0000 UTC m=+2020.698830943" observedRunningTime="2026-03-08 00:56:39.585192893 +0000 UTC m=+2021.059337237" watchObservedRunningTime="2026-03-08 00:56:39.593665063 +0000 UTC m=+2021.067809407" Mar 08 00:56:42 crc kubenswrapper[4762]: I0308 00:56:42.263595 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:56:42 crc kubenswrapper[4762]: E0308 00:56:42.264546 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:56:46 crc kubenswrapper[4762]: I0308 00:56:46.655108 4762 generic.go:334] "Generic (PLEG): container finished" podID="d45e1aae-1e35-43f5-95bb-b9bc4750eb9c" containerID="7f579d220af391252c54aa64ff17a3692d7c5a59e5186291213fe81aa0af5c1c" exitCode=0 Mar 08 00:56:46 crc kubenswrapper[4762]: I0308 00:56:46.655230 4762 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" event={"ID":"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c","Type":"ContainerDied","Data":"7f579d220af391252c54aa64ff17a3692d7c5a59e5186291213fe81aa0af5c1c"} Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.263692 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.421329 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmjrl\" (UniqueName: \"kubernetes.io/projected/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-kube-api-access-lmjrl\") pod \"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c\" (UID: \"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c\") " Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.421417 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-ssh-key-openstack-edpm-ipam\") pod \"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c\" (UID: \"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c\") " Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.421592 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-inventory-0\") pod \"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c\" (UID: \"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c\") " Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.427409 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-kube-api-access-lmjrl" (OuterVolumeSpecName: "kube-api-access-lmjrl") pod "d45e1aae-1e35-43f5-95bb-b9bc4750eb9c" (UID: "d45e1aae-1e35-43f5-95bb-b9bc4750eb9c"). InnerVolumeSpecName "kube-api-access-lmjrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.458868 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d45e1aae-1e35-43f5-95bb-b9bc4750eb9c" (UID: "d45e1aae-1e35-43f5-95bb-b9bc4750eb9c"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.473419 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d45e1aae-1e35-43f5-95bb-b9bc4750eb9c" (UID: "d45e1aae-1e35-43f5-95bb-b9bc4750eb9c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.524329 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.524365 4762 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-inventory-0\") on node \"crc\" DevicePath \"\"" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.524377 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmjrl\" (UniqueName: \"kubernetes.io/projected/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c-kube-api-access-lmjrl\") on node \"crc\" DevicePath \"\"" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.684102 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" 
event={"ID":"d45e1aae-1e35-43f5-95bb-b9bc4750eb9c","Type":"ContainerDied","Data":"31a34e84db0c40de3db91329da0da9975a0a1aba32369bbd8b7caab2c8ce16fe"} Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.684179 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31a34e84db0c40de3db91329da0da9975a0a1aba32369bbd8b7caab2c8ce16fe" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.684203 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-6rgx8" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.814543 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f"] Mar 08 00:56:48 crc kubenswrapper[4762]: E0308 00:56:48.815112 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d45e1aae-1e35-43f5-95bb-b9bc4750eb9c" containerName="ssh-known-hosts-edpm-deployment" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.815131 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d45e1aae-1e35-43f5-95bb-b9bc4750eb9c" containerName="ssh-known-hosts-edpm-deployment" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.815316 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="d45e1aae-1e35-43f5-95bb-b9bc4750eb9c" containerName="ssh-known-hosts-edpm-deployment" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.816121 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.823554 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.823655 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.824515 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.825005 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.832260 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f"] Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.931678 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2thxg\" (UniqueName: \"kubernetes.io/projected/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-kube-api-access-2thxg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lcg5f\" (UID: \"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.931717 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lcg5f\" (UID: \"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" Mar 08 00:56:48 crc kubenswrapper[4762]: I0308 00:56:48.932188 4762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lcg5f\" (UID: \"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" Mar 08 00:56:49 crc kubenswrapper[4762]: I0308 00:56:49.034795 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lcg5f\" (UID: \"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" Mar 08 00:56:49 crc kubenswrapper[4762]: I0308 00:56:49.035141 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2thxg\" (UniqueName: \"kubernetes.io/projected/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-kube-api-access-2thxg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lcg5f\" (UID: \"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" Mar 08 00:56:49 crc kubenswrapper[4762]: I0308 00:56:49.035205 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lcg5f\" (UID: \"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" Mar 08 00:56:49 crc kubenswrapper[4762]: I0308 00:56:49.039509 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-lcg5f\" (UID: \"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" Mar 08 00:56:49 crc kubenswrapper[4762]: I0308 00:56:49.039888 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lcg5f\" (UID: \"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" Mar 08 00:56:49 crc kubenswrapper[4762]: I0308 00:56:49.053363 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2thxg\" (UniqueName: \"kubernetes.io/projected/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-kube-api-access-2thxg\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-lcg5f\" (UID: \"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" Mar 08 00:56:49 crc kubenswrapper[4762]: I0308 00:56:49.147021 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" Mar 08 00:56:49 crc kubenswrapper[4762]: I0308 00:56:49.824386 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f"] Mar 08 00:56:50 crc kubenswrapper[4762]: I0308 00:56:50.711045 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" event={"ID":"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f","Type":"ContainerStarted","Data":"058ad8c33b3f2044d297935fdeb9363ebf7e50375dc481e60b982235ab041ac8"} Mar 08 00:56:50 crc kubenswrapper[4762]: I0308 00:56:50.711140 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" event={"ID":"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f","Type":"ContainerStarted","Data":"7c451867e4102e23219322e5a9a77533170c444cebf1f653f67044c12d6bf147"} Mar 08 00:56:50 crc kubenswrapper[4762]: I0308 00:56:50.738318 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" podStartSLOduration=2.159159895 podStartE2EDuration="2.738291259s" podCreationTimestamp="2026-03-08 00:56:48 +0000 UTC" firstStartedPulling="2026-03-08 00:56:49.82340867 +0000 UTC m=+2031.297553024" lastFinishedPulling="2026-03-08 00:56:50.402540004 +0000 UTC m=+2031.876684388" observedRunningTime="2026-03-08 00:56:50.733648786 +0000 UTC m=+2032.207793140" watchObservedRunningTime="2026-03-08 00:56:50.738291259 +0000 UTC m=+2032.212435613" Mar 08 00:56:54 crc kubenswrapper[4762]: I0308 00:56:54.070515 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zcqpb"] Mar 08 00:56:54 crc kubenswrapper[4762]: I0308 00:56:54.093026 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zcqpb"] Mar 08 00:56:55 crc kubenswrapper[4762]: I0308 00:56:55.279162 4762 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0922e07c-7b7b-4e78-98f0-19238b92ef5c" path="/var/lib/kubelet/pods/0922e07c-7b7b-4e78-98f0-19238b92ef5c/volumes" Mar 08 00:56:57 crc kubenswrapper[4762]: I0308 00:56:57.264670 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:56:57 crc kubenswrapper[4762]: E0308 00:56:57.265204 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:56:58 crc kubenswrapper[4762]: I0308 00:56:58.817501 4762 generic.go:334] "Generic (PLEG): container finished" podID="fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f" containerID="058ad8c33b3f2044d297935fdeb9363ebf7e50375dc481e60b982235ab041ac8" exitCode=0 Mar 08 00:56:58 crc kubenswrapper[4762]: I0308 00:56:58.817612 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" event={"ID":"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f","Type":"ContainerDied","Data":"058ad8c33b3f2044d297935fdeb9363ebf7e50375dc481e60b982235ab041ac8"} Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.257116 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.356174 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-inventory\") pod \"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f\" (UID: \"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f\") " Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.356644 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2thxg\" (UniqueName: \"kubernetes.io/projected/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-kube-api-access-2thxg\") pod \"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f\" (UID: \"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f\") " Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.356863 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-ssh-key-openstack-edpm-ipam\") pod \"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f\" (UID: \"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f\") " Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.362446 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-kube-api-access-2thxg" (OuterVolumeSpecName: "kube-api-access-2thxg") pod "fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f" (UID: "fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f"). InnerVolumeSpecName "kube-api-access-2thxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.387944 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f" (UID: "fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.395892 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-inventory" (OuterVolumeSpecName: "inventory") pod "fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f" (UID: "fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.460308 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.460346 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.460385 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2thxg\" (UniqueName: \"kubernetes.io/projected/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f-kube-api-access-2thxg\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.846383 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" 
event={"ID":"fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f","Type":"ContainerDied","Data":"7c451867e4102e23219322e5a9a77533170c444cebf1f653f67044c12d6bf147"} Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.846429 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c451867e4102e23219322e5a9a77533170c444cebf1f653f67044c12d6bf147" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.846466 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.918970 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql"] Mar 08 00:57:00 crc kubenswrapper[4762]: E0308 00:57:00.919541 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.919561 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.919906 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.920856 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.923524 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.923619 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.926012 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.926187 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 00:57:00 crc kubenswrapper[4762]: I0308 00:57:00.930704 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql"] Mar 08 00:57:01 crc kubenswrapper[4762]: I0308 00:57:01.071051 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27822805-9c82-4baf-b7ce-b0c00c0e335b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql\" (UID: \"27822805-9c82-4baf-b7ce-b0c00c0e335b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" Mar 08 00:57:01 crc kubenswrapper[4762]: I0308 00:57:01.071114 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4fk8\" (UniqueName: \"kubernetes.io/projected/27822805-9c82-4baf-b7ce-b0c00c0e335b-kube-api-access-s4fk8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql\" (UID: \"27822805-9c82-4baf-b7ce-b0c00c0e335b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" Mar 08 00:57:01 crc kubenswrapper[4762]: I0308 00:57:01.071278 4762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27822805-9c82-4baf-b7ce-b0c00c0e335b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql\" (UID: \"27822805-9c82-4baf-b7ce-b0c00c0e335b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" Mar 08 00:57:01 crc kubenswrapper[4762]: I0308 00:57:01.173635 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27822805-9c82-4baf-b7ce-b0c00c0e335b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql\" (UID: \"27822805-9c82-4baf-b7ce-b0c00c0e335b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" Mar 08 00:57:01 crc kubenswrapper[4762]: I0308 00:57:01.174024 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4fk8\" (UniqueName: \"kubernetes.io/projected/27822805-9c82-4baf-b7ce-b0c00c0e335b-kube-api-access-s4fk8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql\" (UID: \"27822805-9c82-4baf-b7ce-b0c00c0e335b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" Mar 08 00:57:01 crc kubenswrapper[4762]: I0308 00:57:01.174078 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27822805-9c82-4baf-b7ce-b0c00c0e335b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql\" (UID: \"27822805-9c82-4baf-b7ce-b0c00c0e335b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" Mar 08 00:57:01 crc kubenswrapper[4762]: I0308 00:57:01.178423 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27822805-9c82-4baf-b7ce-b0c00c0e335b-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql\" (UID: \"27822805-9c82-4baf-b7ce-b0c00c0e335b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" Mar 08 00:57:01 crc kubenswrapper[4762]: I0308 00:57:01.178517 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27822805-9c82-4baf-b7ce-b0c00c0e335b-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql\" (UID: \"27822805-9c82-4baf-b7ce-b0c00c0e335b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" Mar 08 00:57:01 crc kubenswrapper[4762]: I0308 00:57:01.195939 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4fk8\" (UniqueName: \"kubernetes.io/projected/27822805-9c82-4baf-b7ce-b0c00c0e335b-kube-api-access-s4fk8\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql\" (UID: \"27822805-9c82-4baf-b7ce-b0c00c0e335b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" Mar 08 00:57:01 crc kubenswrapper[4762]: I0308 00:57:01.280666 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" Mar 08 00:57:02 crc kubenswrapper[4762]: I0308 00:57:02.567128 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql"] Mar 08 00:57:02 crc kubenswrapper[4762]: I0308 00:57:02.870842 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" event={"ID":"27822805-9c82-4baf-b7ce-b0c00c0e335b","Type":"ContainerStarted","Data":"9f0a500f24a87e0f38411173a85e7c9129d113dbf77c6ea9fb5c0e4ce16fdf48"} Mar 08 00:57:03 crc kubenswrapper[4762]: I0308 00:57:03.887904 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" event={"ID":"27822805-9c82-4baf-b7ce-b0c00c0e335b","Type":"ContainerStarted","Data":"95d0c771f33c0d02f30b67f5a10c306de01d2673ecb0d4aeaa23155117a627b0"} Mar 08 00:57:03 crc kubenswrapper[4762]: I0308 00:57:03.917210 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" podStartSLOduration=3.4771237 podStartE2EDuration="3.917172486s" podCreationTimestamp="2026-03-08 00:57:00 +0000 UTC" firstStartedPulling="2026-03-08 00:57:02.575458337 +0000 UTC m=+2044.049602681" lastFinishedPulling="2026-03-08 00:57:03.015507093 +0000 UTC m=+2044.489651467" observedRunningTime="2026-03-08 00:57:03.909067147 +0000 UTC m=+2045.383211491" watchObservedRunningTime="2026-03-08 00:57:03.917172486 +0000 UTC m=+2045.391316880" Mar 08 00:57:10 crc kubenswrapper[4762]: I0308 00:57:10.264237 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:57:10 crc kubenswrapper[4762]: E0308 00:57:10.265370 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:57:13 crc kubenswrapper[4762]: I0308 00:57:13.015319 4762 generic.go:334] "Generic (PLEG): container finished" podID="27822805-9c82-4baf-b7ce-b0c00c0e335b" containerID="95d0c771f33c0d02f30b67f5a10c306de01d2673ecb0d4aeaa23155117a627b0" exitCode=0 Mar 08 00:57:13 crc kubenswrapper[4762]: I0308 00:57:13.015468 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" event={"ID":"27822805-9c82-4baf-b7ce-b0c00c0e335b","Type":"ContainerDied","Data":"95d0c771f33c0d02f30b67f5a10c306de01d2673ecb0d4aeaa23155117a627b0"} Mar 08 00:57:14 crc kubenswrapper[4762]: I0308 00:57:14.590182 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" Mar 08 00:57:14 crc kubenswrapper[4762]: I0308 00:57:14.793736 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27822805-9c82-4baf-b7ce-b0c00c0e335b-inventory\") pod \"27822805-9c82-4baf-b7ce-b0c00c0e335b\" (UID: \"27822805-9c82-4baf-b7ce-b0c00c0e335b\") " Mar 08 00:57:14 crc kubenswrapper[4762]: I0308 00:57:14.793927 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27822805-9c82-4baf-b7ce-b0c00c0e335b-ssh-key-openstack-edpm-ipam\") pod \"27822805-9c82-4baf-b7ce-b0c00c0e335b\" (UID: \"27822805-9c82-4baf-b7ce-b0c00c0e335b\") " Mar 08 00:57:14 crc kubenswrapper[4762]: I0308 00:57:14.793979 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4fk8\" (UniqueName: 
\"kubernetes.io/projected/27822805-9c82-4baf-b7ce-b0c00c0e335b-kube-api-access-s4fk8\") pod \"27822805-9c82-4baf-b7ce-b0c00c0e335b\" (UID: \"27822805-9c82-4baf-b7ce-b0c00c0e335b\") " Mar 08 00:57:14 crc kubenswrapper[4762]: I0308 00:57:14.801276 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27822805-9c82-4baf-b7ce-b0c00c0e335b-kube-api-access-s4fk8" (OuterVolumeSpecName: "kube-api-access-s4fk8") pod "27822805-9c82-4baf-b7ce-b0c00c0e335b" (UID: "27822805-9c82-4baf-b7ce-b0c00c0e335b"). InnerVolumeSpecName "kube-api-access-s4fk8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:57:14 crc kubenswrapper[4762]: I0308 00:57:14.824723 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27822805-9c82-4baf-b7ce-b0c00c0e335b-inventory" (OuterVolumeSpecName: "inventory") pod "27822805-9c82-4baf-b7ce-b0c00c0e335b" (UID: "27822805-9c82-4baf-b7ce-b0c00c0e335b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:57:14 crc kubenswrapper[4762]: I0308 00:57:14.838930 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27822805-9c82-4baf-b7ce-b0c00c0e335b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "27822805-9c82-4baf-b7ce-b0c00c0e335b" (UID: "27822805-9c82-4baf-b7ce-b0c00c0e335b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:57:14 crc kubenswrapper[4762]: I0308 00:57:14.895305 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/27822805-9c82-4baf-b7ce-b0c00c0e335b-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:14 crc kubenswrapper[4762]: I0308 00:57:14.895336 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/27822805-9c82-4baf-b7ce-b0c00c0e335b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:14 crc kubenswrapper[4762]: I0308 00:57:14.895347 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4fk8\" (UniqueName: \"kubernetes.io/projected/27822805-9c82-4baf-b7ce-b0c00c0e335b-kube-api-access-s4fk8\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.038365 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" event={"ID":"27822805-9c82-4baf-b7ce-b0c00c0e335b","Type":"ContainerDied","Data":"9f0a500f24a87e0f38411173a85e7c9129d113dbf77c6ea9fb5c0e4ce16fdf48"} Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.038678 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f0a500f24a87e0f38411173a85e7c9129d113dbf77c6ea9fb5c0e4ce16fdf48" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.038449 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.145637 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc"] Mar 08 00:57:15 crc kubenswrapper[4762]: E0308 00:57:15.146139 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27822805-9c82-4baf-b7ce-b0c00c0e335b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.146161 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="27822805-9c82-4baf-b7ce-b0c00c0e335b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.146372 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="27822805-9c82-4baf-b7ce-b0c00c0e335b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.147145 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.149775 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.158101 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.158706 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.159031 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.159730 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.161439 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.161586 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.161717 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.185736 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc"] Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.202507 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.202588 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.202630 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.202656 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.202739 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.202809 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.202902 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.203096 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.203201 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.203444 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.203604 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbjtc\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-kube-api-access-fbjtc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.203652 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.203788 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.307894 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.308310 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.308445 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.308567 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.308681 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.308803 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.308938 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.309018 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.309125 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.309207 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.309277 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.309444 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbjtc\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-kube-api-access-fbjtc\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.309492 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.313380 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.314001 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.314560 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.314851 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.314923 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.315048 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.315789 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.316837 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.318981 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.319838 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.325149 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc 
kubenswrapper[4762]: I0308 00:57:15.326489 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.335734 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbjtc\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-kube-api-access-fbjtc\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:15 crc kubenswrapper[4762]: I0308 00:57:15.467048 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:16 crc kubenswrapper[4762]: I0308 00:57:16.028243 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc"] Mar 08 00:57:16 crc kubenswrapper[4762]: I0308 00:57:16.056030 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" event={"ID":"4088c015-f583-40c0-be7c-2ee7305a0dcc","Type":"ContainerStarted","Data":"e53d22093f2fe48064f96435d73942d23845b730e1a27ae0f7b0f87a698ff20c"} Mar 08 00:57:17 crc kubenswrapper[4762]: I0308 00:57:17.089817 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-5xq4s"] Mar 08 00:57:17 crc kubenswrapper[4762]: I0308 00:57:17.095019 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" 
event={"ID":"4088c015-f583-40c0-be7c-2ee7305a0dcc","Type":"ContainerStarted","Data":"5605e7c042eea99fd7e944ec47c882b2a0e8acee46eefa2adc5469b69bdea8db"} Mar 08 00:57:17 crc kubenswrapper[4762]: I0308 00:57:17.105031 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-6crtj"] Mar 08 00:57:17 crc kubenswrapper[4762]: I0308 00:57:17.121352 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-5xq4s"] Mar 08 00:57:17 crc kubenswrapper[4762]: I0308 00:57:17.130455 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-6crtj"] Mar 08 00:57:17 crc kubenswrapper[4762]: I0308 00:57:17.144283 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" podStartSLOduration=1.70301002 podStartE2EDuration="2.144259492s" podCreationTimestamp="2026-03-08 00:57:15 +0000 UTC" firstStartedPulling="2026-03-08 00:57:16.039714503 +0000 UTC m=+2057.513858857" lastFinishedPulling="2026-03-08 00:57:16.480963955 +0000 UTC m=+2057.955108329" observedRunningTime="2026-03-08 00:57:17.1135734 +0000 UTC m=+2058.587717754" watchObservedRunningTime="2026-03-08 00:57:17.144259492 +0000 UTC m=+2058.618403826" Mar 08 00:57:17 crc kubenswrapper[4762]: I0308 00:57:17.298581 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f3ba52d-9795-4455-bc4a-2469ed8b73df" path="/var/lib/kubelet/pods/7f3ba52d-9795-4455-bc4a-2469ed8b73df/volumes" Mar 08 00:57:17 crc kubenswrapper[4762]: I0308 00:57:17.299217 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a237e39c-dfeb-490d-a675-88175cb7f0fb" path="/var/lib/kubelet/pods/a237e39c-dfeb-490d-a675-88175cb7f0fb/volumes" Mar 08 00:57:18 crc kubenswrapper[4762]: I0308 00:57:18.044615 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-2ad8-account-create-update-b6dhz"] Mar 08 00:57:18 crc 
kubenswrapper[4762]: I0308 00:57:18.063370 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-2ad8-account-create-update-b6dhz"] Mar 08 00:57:19 crc kubenswrapper[4762]: I0308 00:57:19.277026 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06e208b1-f109-4635-89ac-399a6421162f" path="/var/lib/kubelet/pods/06e208b1-f109-4635-89ac-399a6421162f/volumes" Mar 08 00:57:22 crc kubenswrapper[4762]: I0308 00:57:22.045991 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4b7h8"] Mar 08 00:57:22 crc kubenswrapper[4762]: I0308 00:57:22.058503 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-4b7h8"] Mar 08 00:57:23 crc kubenswrapper[4762]: I0308 00:57:23.277669 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56650c9a-96a5-4911-8775-8f4c3013053f" path="/var/lib/kubelet/pods/56650c9a-96a5-4911-8775-8f4c3013053f/volumes" Mar 08 00:57:25 crc kubenswrapper[4762]: I0308 00:57:25.264529 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:57:25 crc kubenswrapper[4762]: E0308 00:57:25.265413 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:57:27 crc kubenswrapper[4762]: I0308 00:57:27.614706 4762 scope.go:117] "RemoveContainer" containerID="0ad2e2fbc4ee99bb337b6dd9ee75ad05b741c7275b6219774e93d5804e269ede" Mar 08 00:57:27 crc kubenswrapper[4762]: I0308 00:57:27.687981 4762 scope.go:117] "RemoveContainer" 
containerID="eb5ae3d7ced2c09c8761cfb2a3024d0c4c65f6282876cf18973527c999b70a4a" Mar 08 00:57:27 crc kubenswrapper[4762]: I0308 00:57:27.744575 4762 scope.go:117] "RemoveContainer" containerID="70b7ff0ea294294e5fa962593375dd3d535116656c1b45e5a77dd70b2ae9f521" Mar 08 00:57:27 crc kubenswrapper[4762]: I0308 00:57:27.816170 4762 scope.go:117] "RemoveContainer" containerID="5958aab0df1a8a102d6300d1bded727c3f31794d16f1f8e15e787aa7ef539b60" Mar 08 00:57:27 crc kubenswrapper[4762]: I0308 00:57:27.866299 4762 scope.go:117] "RemoveContainer" containerID="d5bf44ad97bd61215d9f26ce45315f577fd75b93b1a9b9a3bb113b104a998eab" Mar 08 00:57:39 crc kubenswrapper[4762]: I0308 00:57:39.270070 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:57:39 crc kubenswrapper[4762]: E0308 00:57:39.271206 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 00:57:41 crc kubenswrapper[4762]: I0308 00:57:41.683073 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n7bzq"] Mar 08 00:57:41 crc kubenswrapper[4762]: I0308 00:57:41.686349 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:57:41 crc kubenswrapper[4762]: I0308 00:57:41.697630 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n7bzq"] Mar 08 00:57:41 crc kubenswrapper[4762]: I0308 00:57:41.790651 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34b41a4f-15ba-4f85-9a0d-96c077935d00-utilities\") pod \"redhat-operators-n7bzq\" (UID: \"34b41a4f-15ba-4f85-9a0d-96c077935d00\") " pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:57:41 crc kubenswrapper[4762]: I0308 00:57:41.791197 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l29h\" (UniqueName: \"kubernetes.io/projected/34b41a4f-15ba-4f85-9a0d-96c077935d00-kube-api-access-9l29h\") pod \"redhat-operators-n7bzq\" (UID: \"34b41a4f-15ba-4f85-9a0d-96c077935d00\") " pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:57:41 crc kubenswrapper[4762]: I0308 00:57:41.791297 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34b41a4f-15ba-4f85-9a0d-96c077935d00-catalog-content\") pod \"redhat-operators-n7bzq\" (UID: \"34b41a4f-15ba-4f85-9a0d-96c077935d00\") " pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:57:41 crc kubenswrapper[4762]: I0308 00:57:41.894722 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l29h\" (UniqueName: \"kubernetes.io/projected/34b41a4f-15ba-4f85-9a0d-96c077935d00-kube-api-access-9l29h\") pod \"redhat-operators-n7bzq\" (UID: \"34b41a4f-15ba-4f85-9a0d-96c077935d00\") " pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:57:41 crc kubenswrapper[4762]: I0308 00:57:41.894869 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34b41a4f-15ba-4f85-9a0d-96c077935d00-catalog-content\") pod \"redhat-operators-n7bzq\" (UID: \"34b41a4f-15ba-4f85-9a0d-96c077935d00\") " pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:57:41 crc kubenswrapper[4762]: I0308 00:57:41.894958 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34b41a4f-15ba-4f85-9a0d-96c077935d00-utilities\") pod \"redhat-operators-n7bzq\" (UID: \"34b41a4f-15ba-4f85-9a0d-96c077935d00\") " pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:57:41 crc kubenswrapper[4762]: I0308 00:57:41.895357 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34b41a4f-15ba-4f85-9a0d-96c077935d00-catalog-content\") pod \"redhat-operators-n7bzq\" (UID: \"34b41a4f-15ba-4f85-9a0d-96c077935d00\") " pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:57:41 crc kubenswrapper[4762]: I0308 00:57:41.895452 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34b41a4f-15ba-4f85-9a0d-96c077935d00-utilities\") pod \"redhat-operators-n7bzq\" (UID: \"34b41a4f-15ba-4f85-9a0d-96c077935d00\") " pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:57:41 crc kubenswrapper[4762]: I0308 00:57:41.932256 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l29h\" (UniqueName: \"kubernetes.io/projected/34b41a4f-15ba-4f85-9a0d-96c077935d00-kube-api-access-9l29h\") pod \"redhat-operators-n7bzq\" (UID: \"34b41a4f-15ba-4f85-9a0d-96c077935d00\") " pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:57:42 crc kubenswrapper[4762]: I0308 00:57:42.035382 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:57:42 crc kubenswrapper[4762]: I0308 00:57:42.591546 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n7bzq"] Mar 08 00:57:42 crc kubenswrapper[4762]: W0308 00:57:42.605257 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34b41a4f_15ba_4f85_9a0d_96c077935d00.slice/crio-33330dfcd0709e64a824a80925a206864cd04df8eee964c130f7e4ca414c1583 WatchSource:0}: Error finding container 33330dfcd0709e64a824a80925a206864cd04df8eee964c130f7e4ca414c1583: Status 404 returned error can't find the container with id 33330dfcd0709e64a824a80925a206864cd04df8eee964c130f7e4ca414c1583 Mar 08 00:57:43 crc kubenswrapper[4762]: I0308 00:57:43.436950 4762 generic.go:334] "Generic (PLEG): container finished" podID="34b41a4f-15ba-4f85-9a0d-96c077935d00" containerID="4362e55741f19ba8d76a983acd64d3c9d24ef06080d80e90479afd506802afa0" exitCode=0 Mar 08 00:57:43 crc kubenswrapper[4762]: I0308 00:57:43.437167 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7bzq" event={"ID":"34b41a4f-15ba-4f85-9a0d-96c077935d00","Type":"ContainerDied","Data":"4362e55741f19ba8d76a983acd64d3c9d24ef06080d80e90479afd506802afa0"} Mar 08 00:57:43 crc kubenswrapper[4762]: I0308 00:57:43.437245 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7bzq" event={"ID":"34b41a4f-15ba-4f85-9a0d-96c077935d00","Type":"ContainerStarted","Data":"33330dfcd0709e64a824a80925a206864cd04df8eee964c130f7e4ca414c1583"} Mar 08 00:57:45 crc kubenswrapper[4762]: I0308 00:57:45.466095 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7bzq" 
event={"ID":"34b41a4f-15ba-4f85-9a0d-96c077935d00","Type":"ContainerStarted","Data":"1707c74e9518223717f176aed4e770a30834a8588b3ce839bd8597461660bb0d"} Mar 08 00:57:49 crc kubenswrapper[4762]: I0308 00:57:49.509903 4762 generic.go:334] "Generic (PLEG): container finished" podID="34b41a4f-15ba-4f85-9a0d-96c077935d00" containerID="1707c74e9518223717f176aed4e770a30834a8588b3ce839bd8597461660bb0d" exitCode=0 Mar 08 00:57:49 crc kubenswrapper[4762]: I0308 00:57:49.509956 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7bzq" event={"ID":"34b41a4f-15ba-4f85-9a0d-96c077935d00","Type":"ContainerDied","Data":"1707c74e9518223717f176aed4e770a30834a8588b3ce839bd8597461660bb0d"} Mar 08 00:57:50 crc kubenswrapper[4762]: I0308 00:57:50.527179 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7bzq" event={"ID":"34b41a4f-15ba-4f85-9a0d-96c077935d00","Type":"ContainerStarted","Data":"63f9f70e8fb4501d82d6d4d9b0b285f6bf50e487d35c591ca84a4b27ec60bd7f"} Mar 08 00:57:52 crc kubenswrapper[4762]: I0308 00:57:52.036307 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:57:52 crc kubenswrapper[4762]: I0308 00:57:52.036610 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:57:52 crc kubenswrapper[4762]: I0308 00:57:52.263909 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 00:57:53 crc kubenswrapper[4762]: I0308 00:57:53.094233 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n7bzq" podUID="34b41a4f-15ba-4f85-9a0d-96c077935d00" containerName="registry-server" probeResult="failure" output=< Mar 08 00:57:53 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 00:57:53 crc 
kubenswrapper[4762]: > Mar 08 00:57:53 crc kubenswrapper[4762]: I0308 00:57:53.564339 4762 generic.go:334] "Generic (PLEG): container finished" podID="4088c015-f583-40c0-be7c-2ee7305a0dcc" containerID="5605e7c042eea99fd7e944ec47c882b2a0e8acee46eefa2adc5469b69bdea8db" exitCode=0 Mar 08 00:57:53 crc kubenswrapper[4762]: I0308 00:57:53.564749 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" event={"ID":"4088c015-f583-40c0-be7c-2ee7305a0dcc","Type":"ContainerDied","Data":"5605e7c042eea99fd7e944ec47c882b2a0e8acee46eefa2adc5469b69bdea8db"} Mar 08 00:57:53 crc kubenswrapper[4762]: I0308 00:57:53.569784 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"20f03e1e6f0a5e71fdfaf64291cd12e01a1c6707b014e6b5cbd7f13c2f3c7add"} Mar 08 00:57:53 crc kubenswrapper[4762]: I0308 00:57:53.591944 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n7bzq" podStartSLOduration=6.050045937 podStartE2EDuration="12.591925502s" podCreationTimestamp="2026-03-08 00:57:41 +0000 UTC" firstStartedPulling="2026-03-08 00:57:43.438654726 +0000 UTC m=+2084.912799070" lastFinishedPulling="2026-03-08 00:57:49.980534281 +0000 UTC m=+2091.454678635" observedRunningTime="2026-03-08 00:57:50.556613329 +0000 UTC m=+2092.030757773" watchObservedRunningTime="2026-03-08 00:57:53.591925502 +0000 UTC m=+2095.066069846" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.186401 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.222053 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-inventory\") pod \"4088c015-f583-40c0-be7c-2ee7305a0dcc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.222115 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-libvirt-combined-ca-bundle\") pod \"4088c015-f583-40c0-be7c-2ee7305a0dcc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.222200 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-ssh-key-openstack-edpm-ipam\") pod \"4088c015-f583-40c0-be7c-2ee7305a0dcc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.222246 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"4088c015-f583-40c0-be7c-2ee7305a0dcc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.222284 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-telemetry-power-monitoring-combined-ca-bundle\") pod 
\"4088c015-f583-40c0-be7c-2ee7305a0dcc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.222382 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-bootstrap-combined-ca-bundle\") pod \"4088c015-f583-40c0-be7c-2ee7305a0dcc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.222416 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-ovn-combined-ca-bundle\") pod \"4088c015-f583-40c0-be7c-2ee7305a0dcc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.222458 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"4088c015-f583-40c0-be7c-2ee7305a0dcc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.222488 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-repo-setup-combined-ca-bundle\") pod \"4088c015-f583-40c0-be7c-2ee7305a0dcc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.222531 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"4088c015-f583-40c0-be7c-2ee7305a0dcc\" 
(UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.222549 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-ovn-default-certs-0\") pod \"4088c015-f583-40c0-be7c-2ee7305a0dcc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.222582 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbjtc\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-kube-api-access-fbjtc\") pod \"4088c015-f583-40c0-be7c-2ee7305a0dcc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.222617 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-telemetry-combined-ca-bundle\") pod \"4088c015-f583-40c0-be7c-2ee7305a0dcc\" (UID: \"4088c015-f583-40c0-be7c-2ee7305a0dcc\") " Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.244647 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "4088c015-f583-40c0-be7c-2ee7305a0dcc" (UID: "4088c015-f583-40c0-be7c-2ee7305a0dcc"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.245611 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "4088c015-f583-40c0-be7c-2ee7305a0dcc" (UID: "4088c015-f583-40c0-be7c-2ee7305a0dcc"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.245629 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "4088c015-f583-40c0-be7c-2ee7305a0dcc" (UID: "4088c015-f583-40c0-be7c-2ee7305a0dcc"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.248008 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "4088c015-f583-40c0-be7c-2ee7305a0dcc" (UID: "4088c015-f583-40c0-be7c-2ee7305a0dcc"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.248019 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "4088c015-f583-40c0-be7c-2ee7305a0dcc" (UID: "4088c015-f583-40c0-be7c-2ee7305a0dcc"). 
InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.248036 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "4088c015-f583-40c0-be7c-2ee7305a0dcc" (UID: "4088c015-f583-40c0-be7c-2ee7305a0dcc"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.248075 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "4088c015-f583-40c0-be7c-2ee7305a0dcc" (UID: "4088c015-f583-40c0-be7c-2ee7305a0dcc"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.248091 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-kube-api-access-fbjtc" (OuterVolumeSpecName: "kube-api-access-fbjtc") pod "4088c015-f583-40c0-be7c-2ee7305a0dcc" (UID: "4088c015-f583-40c0-be7c-2ee7305a0dcc"). InnerVolumeSpecName "kube-api-access-fbjtc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.248147 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "4088c015-f583-40c0-be7c-2ee7305a0dcc" (UID: "4088c015-f583-40c0-be7c-2ee7305a0dcc"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.248489 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "4088c015-f583-40c0-be7c-2ee7305a0dcc" (UID: "4088c015-f583-40c0-be7c-2ee7305a0dcc"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.249164 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "4088c015-f583-40c0-be7c-2ee7305a0dcc" (UID: "4088c015-f583-40c0-be7c-2ee7305a0dcc"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.301263 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4088c015-f583-40c0-be7c-2ee7305a0dcc" (UID: "4088c015-f583-40c0-be7c-2ee7305a0dcc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.312871 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-inventory" (OuterVolumeSpecName: "inventory") pod "4088c015-f583-40c0-be7c-2ee7305a0dcc" (UID: "4088c015-f583-40c0-be7c-2ee7305a0dcc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.326267 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.326316 4762 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.326328 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.326339 4762 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.326353 4762 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.326368 4762 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.326378 4762 reconciler_common.go:293] "Volume detached for volume 
\"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.326389 4762 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.326402 4762 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.326414 4762 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.326424 4762 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.326435 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbjtc\" (UniqueName: \"kubernetes.io/projected/4088c015-f583-40c0-be7c-2ee7305a0dcc-kube-api-access-fbjtc\") on node \"crc\" DevicePath \"\"" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.326446 4762 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4088c015-f583-40c0-be7c-2ee7305a0dcc-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.585819 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" event={"ID":"4088c015-f583-40c0-be7c-2ee7305a0dcc","Type":"ContainerDied","Data":"e53d22093f2fe48064f96435d73942d23845b730e1a27ae0f7b0f87a698ff20c"} Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.586131 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e53d22093f2fe48064f96435d73942d23845b730e1a27ae0f7b0f87a698ff20c" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.585873 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.706392 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk"] Mar 08 00:57:55 crc kubenswrapper[4762]: E0308 00:57:55.706789 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4088c015-f583-40c0-be7c-2ee7305a0dcc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.706805 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="4088c015-f583-40c0-be7c-2ee7305a0dcc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.707009 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="4088c015-f583-40c0-be7c-2ee7305a0dcc" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.707671 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.709890 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.710501 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.711962 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.712772 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.715882 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.726480 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk"] Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.747695 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9n4jk\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.747822 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/772ef692-515e-40a3-b0c7-f3f78e3620c1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9n4jk\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.747915 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9n4jk\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.747937 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l47c\" (UniqueName: \"kubernetes.io/projected/772ef692-515e-40a3-b0c7-f3f78e3620c1-kube-api-access-9l47c\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9n4jk\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.747961 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9n4jk\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.850742 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9n4jk\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.850898 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/772ef692-515e-40a3-b0c7-f3f78e3620c1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9n4jk\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.851048 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9n4jk\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.851087 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l47c\" (UniqueName: \"kubernetes.io/projected/772ef692-515e-40a3-b0c7-f3f78e3620c1-kube-api-access-9l47c\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9n4jk\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.851120 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9n4jk\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.851829 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/772ef692-515e-40a3-b0c7-f3f78e3620c1-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9n4jk\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.856374 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9n4jk\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.856478 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9n4jk\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.856983 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9n4jk\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:55 crc kubenswrapper[4762]: I0308 00:57:55.872623 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l47c\" (UniqueName: \"kubernetes.io/projected/772ef692-515e-40a3-b0c7-f3f78e3620c1-kube-api-access-9l47c\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-9n4jk\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:56 crc kubenswrapper[4762]: I0308 00:57:56.030856 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:57:56 crc kubenswrapper[4762]: I0308 00:57:56.654069 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk"] Mar 08 00:57:56 crc kubenswrapper[4762]: W0308 00:57:56.668856 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod772ef692_515e_40a3_b0c7_f3f78e3620c1.slice/crio-226b2760138d9e80fa79a78464d94acf7cb2016650efe87b0fec56c68cf4a8fb WatchSource:0}: Error finding container 226b2760138d9e80fa79a78464d94acf7cb2016650efe87b0fec56c68cf4a8fb: Status 404 returned error can't find the container with id 226b2760138d9e80fa79a78464d94acf7cb2016650efe87b0fec56c68cf4a8fb Mar 08 00:57:57 crc kubenswrapper[4762]: I0308 00:57:57.604701 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" event={"ID":"772ef692-515e-40a3-b0c7-f3f78e3620c1","Type":"ContainerStarted","Data":"11a0161501ea83561acb466435b786fcc1549249d9a9203eef101a74e84b87aa"} Mar 08 00:57:57 crc kubenswrapper[4762]: I0308 00:57:57.605051 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" event={"ID":"772ef692-515e-40a3-b0c7-f3f78e3620c1","Type":"ContainerStarted","Data":"226b2760138d9e80fa79a78464d94acf7cb2016650efe87b0fec56c68cf4a8fb"} Mar 08 00:57:57 crc kubenswrapper[4762]: I0308 00:57:57.626717 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" podStartSLOduration=2.158253383 podStartE2EDuration="2.626697534s" podCreationTimestamp="2026-03-08 00:57:55 +0000 UTC" firstStartedPulling="2026-03-08 00:57:56.675849067 +0000 UTC m=+2098.149993411" lastFinishedPulling="2026-03-08 00:57:57.144293208 +0000 UTC m=+2098.618437562" observedRunningTime="2026-03-08 
00:57:57.621474635 +0000 UTC m=+2099.095618979" watchObservedRunningTime="2026-03-08 00:57:57.626697534 +0000 UTC m=+2099.100841878" Mar 08 00:58:00 crc kubenswrapper[4762]: I0308 00:58:00.144388 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548858-zgd89"] Mar 08 00:58:00 crc kubenswrapper[4762]: I0308 00:58:00.147335 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548858-zgd89" Mar 08 00:58:00 crc kubenswrapper[4762]: I0308 00:58:00.159637 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548858-zgd89"] Mar 08 00:58:00 crc kubenswrapper[4762]: I0308 00:58:00.178810 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 00:58:00 crc kubenswrapper[4762]: I0308 00:58:00.179073 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 00:58:00 crc kubenswrapper[4762]: I0308 00:58:00.179233 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 00:58:00 crc kubenswrapper[4762]: I0308 00:58:00.245788 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bj6v\" (UniqueName: \"kubernetes.io/projected/fbf0114e-ac57-40f9-b3d7-219e71955cab-kube-api-access-9bj6v\") pod \"auto-csr-approver-29548858-zgd89\" (UID: \"fbf0114e-ac57-40f9-b3d7-219e71955cab\") " pod="openshift-infra/auto-csr-approver-29548858-zgd89" Mar 08 00:58:00 crc kubenswrapper[4762]: I0308 00:58:00.347129 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bj6v\" (UniqueName: \"kubernetes.io/projected/fbf0114e-ac57-40f9-b3d7-219e71955cab-kube-api-access-9bj6v\") pod \"auto-csr-approver-29548858-zgd89\" (UID: \"fbf0114e-ac57-40f9-b3d7-219e71955cab\") " 
pod="openshift-infra/auto-csr-approver-29548858-zgd89" Mar 08 00:58:00 crc kubenswrapper[4762]: I0308 00:58:00.366297 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bj6v\" (UniqueName: \"kubernetes.io/projected/fbf0114e-ac57-40f9-b3d7-219e71955cab-kube-api-access-9bj6v\") pod \"auto-csr-approver-29548858-zgd89\" (UID: \"fbf0114e-ac57-40f9-b3d7-219e71955cab\") " pod="openshift-infra/auto-csr-approver-29548858-zgd89" Mar 08 00:58:00 crc kubenswrapper[4762]: I0308 00:58:00.490735 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548858-zgd89" Mar 08 00:58:00 crc kubenswrapper[4762]: I0308 00:58:00.961750 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548858-zgd89"] Mar 08 00:58:01 crc kubenswrapper[4762]: I0308 00:58:01.673799 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548858-zgd89" event={"ID":"fbf0114e-ac57-40f9-b3d7-219e71955cab","Type":"ContainerStarted","Data":"70be0edac0dbc4f61ccdcede59ee91e2f7a61959817add32efb21ca05a669f71"} Mar 08 00:58:02 crc kubenswrapper[4762]: I0308 00:58:02.684131 4762 generic.go:334] "Generic (PLEG): container finished" podID="fbf0114e-ac57-40f9-b3d7-219e71955cab" containerID="6950c9cce05d665407a8c8c606f58c8ee05dcb74bfc732e0db4749be1f0daf41" exitCode=0 Mar 08 00:58:02 crc kubenswrapper[4762]: I0308 00:58:02.684208 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548858-zgd89" event={"ID":"fbf0114e-ac57-40f9-b3d7-219e71955cab","Type":"ContainerDied","Data":"6950c9cce05d665407a8c8c606f58c8ee05dcb74bfc732e0db4749be1f0daf41"} Mar 08 00:58:03 crc kubenswrapper[4762]: I0308 00:58:03.066155 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-8694c"] Mar 08 00:58:03 crc kubenswrapper[4762]: I0308 00:58:03.086578 4762 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-8694c"] Mar 08 00:58:03 crc kubenswrapper[4762]: I0308 00:58:03.110937 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n7bzq" podUID="34b41a4f-15ba-4f85-9a0d-96c077935d00" containerName="registry-server" probeResult="failure" output=< Mar 08 00:58:03 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 00:58:03 crc kubenswrapper[4762]: > Mar 08 00:58:03 crc kubenswrapper[4762]: I0308 00:58:03.281712 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03cb9dfe-119e-4bce-808a-375258d654d7" path="/var/lib/kubelet/pods/03cb9dfe-119e-4bce-808a-375258d654d7/volumes" Mar 08 00:58:04 crc kubenswrapper[4762]: I0308 00:58:04.071173 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548858-zgd89" Mar 08 00:58:04 crc kubenswrapper[4762]: I0308 00:58:04.141897 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bj6v\" (UniqueName: \"kubernetes.io/projected/fbf0114e-ac57-40f9-b3d7-219e71955cab-kube-api-access-9bj6v\") pod \"fbf0114e-ac57-40f9-b3d7-219e71955cab\" (UID: \"fbf0114e-ac57-40f9-b3d7-219e71955cab\") " Mar 08 00:58:04 crc kubenswrapper[4762]: I0308 00:58:04.147720 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbf0114e-ac57-40f9-b3d7-219e71955cab-kube-api-access-9bj6v" (OuterVolumeSpecName: "kube-api-access-9bj6v") pod "fbf0114e-ac57-40f9-b3d7-219e71955cab" (UID: "fbf0114e-ac57-40f9-b3d7-219e71955cab"). InnerVolumeSpecName "kube-api-access-9bj6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:58:04 crc kubenswrapper[4762]: I0308 00:58:04.245059 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bj6v\" (UniqueName: \"kubernetes.io/projected/fbf0114e-ac57-40f9-b3d7-219e71955cab-kube-api-access-9bj6v\") on node \"crc\" DevicePath \"\"" Mar 08 00:58:04 crc kubenswrapper[4762]: I0308 00:58:04.702493 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548858-zgd89" event={"ID":"fbf0114e-ac57-40f9-b3d7-219e71955cab","Type":"ContainerDied","Data":"70be0edac0dbc4f61ccdcede59ee91e2f7a61959817add32efb21ca05a669f71"} Mar 08 00:58:04 crc kubenswrapper[4762]: I0308 00:58:04.702742 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70be0edac0dbc4f61ccdcede59ee91e2f7a61959817add32efb21ca05a669f71" Mar 08 00:58:04 crc kubenswrapper[4762]: I0308 00:58:04.702540 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548858-zgd89" Mar 08 00:58:05 crc kubenswrapper[4762]: I0308 00:58:05.134044 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548852-x2bd8"] Mar 08 00:58:05 crc kubenswrapper[4762]: I0308 00:58:05.155836 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548852-x2bd8"] Mar 08 00:58:05 crc kubenswrapper[4762]: I0308 00:58:05.281326 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ddcaece-8907-4f18-b7a7-ab8def2796cd" path="/var/lib/kubelet/pods/9ddcaece-8907-4f18-b7a7-ab8def2796cd/volumes" Mar 08 00:58:12 crc kubenswrapper[4762]: I0308 00:58:12.087491 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:58:12 crc kubenswrapper[4762]: I0308 00:58:12.143264 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:58:12 crc kubenswrapper[4762]: I0308 00:58:12.882084 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n7bzq"] Mar 08 00:58:13 crc kubenswrapper[4762]: I0308 00:58:13.818520 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n7bzq" podUID="34b41a4f-15ba-4f85-9a0d-96c077935d00" containerName="registry-server" containerID="cri-o://63f9f70e8fb4501d82d6d4d9b0b285f6bf50e487d35c591ca84a4b27ec60bd7f" gracePeriod=2 Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.368842 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.504930 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34b41a4f-15ba-4f85-9a0d-96c077935d00-catalog-content\") pod \"34b41a4f-15ba-4f85-9a0d-96c077935d00\" (UID: \"34b41a4f-15ba-4f85-9a0d-96c077935d00\") " Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.505363 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l29h\" (UniqueName: \"kubernetes.io/projected/34b41a4f-15ba-4f85-9a0d-96c077935d00-kube-api-access-9l29h\") pod \"34b41a4f-15ba-4f85-9a0d-96c077935d00\" (UID: \"34b41a4f-15ba-4f85-9a0d-96c077935d00\") " Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.505563 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34b41a4f-15ba-4f85-9a0d-96c077935d00-utilities\") pod \"34b41a4f-15ba-4f85-9a0d-96c077935d00\" (UID: \"34b41a4f-15ba-4f85-9a0d-96c077935d00\") " Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.507523 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/34b41a4f-15ba-4f85-9a0d-96c077935d00-utilities" (OuterVolumeSpecName: "utilities") pod "34b41a4f-15ba-4f85-9a0d-96c077935d00" (UID: "34b41a4f-15ba-4f85-9a0d-96c077935d00"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.514046 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34b41a4f-15ba-4f85-9a0d-96c077935d00-kube-api-access-9l29h" (OuterVolumeSpecName: "kube-api-access-9l29h") pod "34b41a4f-15ba-4f85-9a0d-96c077935d00" (UID: "34b41a4f-15ba-4f85-9a0d-96c077935d00"). InnerVolumeSpecName "kube-api-access-9l29h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.609163 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l29h\" (UniqueName: \"kubernetes.io/projected/34b41a4f-15ba-4f85-9a0d-96c077935d00-kube-api-access-9l29h\") on node \"crc\" DevicePath \"\"" Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.609227 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34b41a4f-15ba-4f85-9a0d-96c077935d00-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.657149 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34b41a4f-15ba-4f85-9a0d-96c077935d00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34b41a4f-15ba-4f85-9a0d-96c077935d00" (UID: "34b41a4f-15ba-4f85-9a0d-96c077935d00"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.710830 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34b41a4f-15ba-4f85-9a0d-96c077935d00-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.829295 4762 generic.go:334] "Generic (PLEG): container finished" podID="34b41a4f-15ba-4f85-9a0d-96c077935d00" containerID="63f9f70e8fb4501d82d6d4d9b0b285f6bf50e487d35c591ca84a4b27ec60bd7f" exitCode=0 Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.829392 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n7bzq" Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.829398 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7bzq" event={"ID":"34b41a4f-15ba-4f85-9a0d-96c077935d00","Type":"ContainerDied","Data":"63f9f70e8fb4501d82d6d4d9b0b285f6bf50e487d35c591ca84a4b27ec60bd7f"} Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.830212 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n7bzq" event={"ID":"34b41a4f-15ba-4f85-9a0d-96c077935d00","Type":"ContainerDied","Data":"33330dfcd0709e64a824a80925a206864cd04df8eee964c130f7e4ca414c1583"} Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.830236 4762 scope.go:117] "RemoveContainer" containerID="63f9f70e8fb4501d82d6d4d9b0b285f6bf50e487d35c591ca84a4b27ec60bd7f" Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.860829 4762 scope.go:117] "RemoveContainer" containerID="1707c74e9518223717f176aed4e770a30834a8588b3ce839bd8597461660bb0d" Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.865169 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n7bzq"] Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 
00:58:14.878611 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n7bzq"] Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.894634 4762 scope.go:117] "RemoveContainer" containerID="4362e55741f19ba8d76a983acd64d3c9d24ef06080d80e90479afd506802afa0" Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.927978 4762 scope.go:117] "RemoveContainer" containerID="63f9f70e8fb4501d82d6d4d9b0b285f6bf50e487d35c591ca84a4b27ec60bd7f" Mar 08 00:58:14 crc kubenswrapper[4762]: E0308 00:58:14.928390 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63f9f70e8fb4501d82d6d4d9b0b285f6bf50e487d35c591ca84a4b27ec60bd7f\": container with ID starting with 63f9f70e8fb4501d82d6d4d9b0b285f6bf50e487d35c591ca84a4b27ec60bd7f not found: ID does not exist" containerID="63f9f70e8fb4501d82d6d4d9b0b285f6bf50e487d35c591ca84a4b27ec60bd7f" Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.928446 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63f9f70e8fb4501d82d6d4d9b0b285f6bf50e487d35c591ca84a4b27ec60bd7f"} err="failed to get container status \"63f9f70e8fb4501d82d6d4d9b0b285f6bf50e487d35c591ca84a4b27ec60bd7f\": rpc error: code = NotFound desc = could not find container \"63f9f70e8fb4501d82d6d4d9b0b285f6bf50e487d35c591ca84a4b27ec60bd7f\": container with ID starting with 63f9f70e8fb4501d82d6d4d9b0b285f6bf50e487d35c591ca84a4b27ec60bd7f not found: ID does not exist" Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.928482 4762 scope.go:117] "RemoveContainer" containerID="1707c74e9518223717f176aed4e770a30834a8588b3ce839bd8597461660bb0d" Mar 08 00:58:14 crc kubenswrapper[4762]: E0308 00:58:14.928945 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1707c74e9518223717f176aed4e770a30834a8588b3ce839bd8597461660bb0d\": container with ID 
starting with 1707c74e9518223717f176aed4e770a30834a8588b3ce839bd8597461660bb0d not found: ID does not exist" containerID="1707c74e9518223717f176aed4e770a30834a8588b3ce839bd8597461660bb0d" Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.928983 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1707c74e9518223717f176aed4e770a30834a8588b3ce839bd8597461660bb0d"} err="failed to get container status \"1707c74e9518223717f176aed4e770a30834a8588b3ce839bd8597461660bb0d\": rpc error: code = NotFound desc = could not find container \"1707c74e9518223717f176aed4e770a30834a8588b3ce839bd8597461660bb0d\": container with ID starting with 1707c74e9518223717f176aed4e770a30834a8588b3ce839bd8597461660bb0d not found: ID does not exist" Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.929012 4762 scope.go:117] "RemoveContainer" containerID="4362e55741f19ba8d76a983acd64d3c9d24ef06080d80e90479afd506802afa0" Mar 08 00:58:14 crc kubenswrapper[4762]: E0308 00:58:14.929306 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4362e55741f19ba8d76a983acd64d3c9d24ef06080d80e90479afd506802afa0\": container with ID starting with 4362e55741f19ba8d76a983acd64d3c9d24ef06080d80e90479afd506802afa0 not found: ID does not exist" containerID="4362e55741f19ba8d76a983acd64d3c9d24ef06080d80e90479afd506802afa0" Mar 08 00:58:14 crc kubenswrapper[4762]: I0308 00:58:14.929323 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4362e55741f19ba8d76a983acd64d3c9d24ef06080d80e90479afd506802afa0"} err="failed to get container status \"4362e55741f19ba8d76a983acd64d3c9d24ef06080d80e90479afd506802afa0\": rpc error: code = NotFound desc = could not find container \"4362e55741f19ba8d76a983acd64d3c9d24ef06080d80e90479afd506802afa0\": container with ID starting with 4362e55741f19ba8d76a983acd64d3c9d24ef06080d80e90479afd506802afa0 not found: 
ID does not exist" Mar 08 00:58:15 crc kubenswrapper[4762]: I0308 00:58:15.280258 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34b41a4f-15ba-4f85-9a0d-96c077935d00" path="/var/lib/kubelet/pods/34b41a4f-15ba-4f85-9a0d-96c077935d00/volumes" Mar 08 00:58:28 crc kubenswrapper[4762]: I0308 00:58:28.022015 4762 scope.go:117] "RemoveContainer" containerID="4a7aecbc7b316b35babbcff875b1276ae13693d361452da3afa93603f93319b9" Mar 08 00:58:28 crc kubenswrapper[4762]: I0308 00:58:28.411062 4762 scope.go:117] "RemoveContainer" containerID="d036e39e7bd43f13739d4c1e24070d33cf6e969f28a898534c4d38a015f1f7f4" Mar 08 00:59:09 crc kubenswrapper[4762]: I0308 00:59:09.454746 4762 generic.go:334] "Generic (PLEG): container finished" podID="772ef692-515e-40a3-b0c7-f3f78e3620c1" containerID="11a0161501ea83561acb466435b786fcc1549249d9a9203eef101a74e84b87aa" exitCode=0 Mar 08 00:59:09 crc kubenswrapper[4762]: I0308 00:59:09.454880 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" event={"ID":"772ef692-515e-40a3-b0c7-f3f78e3620c1","Type":"ContainerDied","Data":"11a0161501ea83561acb466435b786fcc1549249d9a9203eef101a74e84b87aa"} Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.042133 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.168073 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/772ef692-515e-40a3-b0c7-f3f78e3620c1-ovncontroller-config-0\") pod \"772ef692-515e-40a3-b0c7-f3f78e3620c1\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.168273 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-ovn-combined-ca-bundle\") pod \"772ef692-515e-40a3-b0c7-f3f78e3620c1\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.168311 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l47c\" (UniqueName: \"kubernetes.io/projected/772ef692-515e-40a3-b0c7-f3f78e3620c1-kube-api-access-9l47c\") pod \"772ef692-515e-40a3-b0c7-f3f78e3620c1\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.168467 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-inventory\") pod \"772ef692-515e-40a3-b0c7-f3f78e3620c1\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.168522 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-ssh-key-openstack-edpm-ipam\") pod \"772ef692-515e-40a3-b0c7-f3f78e3620c1\" (UID: \"772ef692-515e-40a3-b0c7-f3f78e3620c1\") " Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.174888 4762 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/772ef692-515e-40a3-b0c7-f3f78e3620c1-kube-api-access-9l47c" (OuterVolumeSpecName: "kube-api-access-9l47c") pod "772ef692-515e-40a3-b0c7-f3f78e3620c1" (UID: "772ef692-515e-40a3-b0c7-f3f78e3620c1"). InnerVolumeSpecName "kube-api-access-9l47c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.176859 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "772ef692-515e-40a3-b0c7-f3f78e3620c1" (UID: "772ef692-515e-40a3-b0c7-f3f78e3620c1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.199368 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "772ef692-515e-40a3-b0c7-f3f78e3620c1" (UID: "772ef692-515e-40a3-b0c7-f3f78e3620c1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.205263 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/772ef692-515e-40a3-b0c7-f3f78e3620c1-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "772ef692-515e-40a3-b0c7-f3f78e3620c1" (UID: "772ef692-515e-40a3-b0c7-f3f78e3620c1"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.211204 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-inventory" (OuterVolumeSpecName: "inventory") pod "772ef692-515e-40a3-b0c7-f3f78e3620c1" (UID: "772ef692-515e-40a3-b0c7-f3f78e3620c1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.278167 4762 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/772ef692-515e-40a3-b0c7-f3f78e3620c1-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.278205 4762 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.278247 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l47c\" (UniqueName: \"kubernetes.io/projected/772ef692-515e-40a3-b0c7-f3f78e3620c1-kube-api-access-9l47c\") on node \"crc\" DevicePath \"\"" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.278265 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.278280 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/772ef692-515e-40a3-b0c7-f3f78e3620c1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.479842 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" event={"ID":"772ef692-515e-40a3-b0c7-f3f78e3620c1","Type":"ContainerDied","Data":"226b2760138d9e80fa79a78464d94acf7cb2016650efe87b0fec56c68cf4a8fb"} Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.479876 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="226b2760138d9e80fa79a78464d94acf7cb2016650efe87b0fec56c68cf4a8fb" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.479925 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.639433 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc"] Mar 08 00:59:11 crc kubenswrapper[4762]: E0308 00:59:11.639826 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b41a4f-15ba-4f85-9a0d-96c077935d00" containerName="extract-content" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.639841 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b41a4f-15ba-4f85-9a0d-96c077935d00" containerName="extract-content" Mar 08 00:59:11 crc kubenswrapper[4762]: E0308 00:59:11.639850 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772ef692-515e-40a3-b0c7-f3f78e3620c1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.639856 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="772ef692-515e-40a3-b0c7-f3f78e3620c1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 08 00:59:11 crc kubenswrapper[4762]: E0308 00:59:11.639873 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b41a4f-15ba-4f85-9a0d-96c077935d00" containerName="extract-utilities" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.639880 4762 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="34b41a4f-15ba-4f85-9a0d-96c077935d00" containerName="extract-utilities" Mar 08 00:59:11 crc kubenswrapper[4762]: E0308 00:59:11.639890 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34b41a4f-15ba-4f85-9a0d-96c077935d00" containerName="registry-server" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.639896 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="34b41a4f-15ba-4f85-9a0d-96c077935d00" containerName="registry-server" Mar 08 00:59:11 crc kubenswrapper[4762]: E0308 00:59:11.639904 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbf0114e-ac57-40f9-b3d7-219e71955cab" containerName="oc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.639909 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbf0114e-ac57-40f9-b3d7-219e71955cab" containerName="oc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.640106 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="772ef692-515e-40a3-b0c7-f3f78e3620c1" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.640116 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="34b41a4f-15ba-4f85-9a0d-96c077935d00" containerName="registry-server" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.640128 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbf0114e-ac57-40f9-b3d7-219e71955cab" containerName="oc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.640866 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.644407 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.644534 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.644592 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.645531 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.645881 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.660431 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc"] Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.786785 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.787941 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.787979 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkwm2\" (UniqueName: \"kubernetes.io/projected/068a0247-3a6f-4505-9574-deba254e56f0-kube-api-access-lkwm2\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.788057 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.788498 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.891329 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.891468 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.891930 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.891969 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.892002 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkwm2\" (UniqueName: \"kubernetes.io/projected/068a0247-3a6f-4505-9574-deba254e56f0-kube-api-access-lkwm2\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.896279 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc\" (UID: 
\"068a0247-3a6f-4505-9574-deba254e56f0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.896561 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.898381 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.905275 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.925997 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkwm2\" (UniqueName: \"kubernetes.io/projected/068a0247-3a6f-4505-9574-deba254e56f0-kube-api-access-lkwm2\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:11 crc kubenswrapper[4762]: I0308 00:59:11.961432 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 00:59:12 crc kubenswrapper[4762]: I0308 00:59:12.606090 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc"] Mar 08 00:59:12 crc kubenswrapper[4762]: W0308 00:59:12.615559 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod068a0247_3a6f_4505_9574_deba254e56f0.slice/crio-854da315d7f6aad365fd5fbaa15b5b162cb429157046b8c50130cf792c3e475e WatchSource:0}: Error finding container 854da315d7f6aad365fd5fbaa15b5b162cb429157046b8c50130cf792c3e475e: Status 404 returned error can't find the container with id 854da315d7f6aad365fd5fbaa15b5b162cb429157046b8c50130cf792c3e475e Mar 08 00:59:12 crc kubenswrapper[4762]: I0308 00:59:12.619126 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 00:59:13 crc kubenswrapper[4762]: I0308 00:59:13.501573 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" event={"ID":"068a0247-3a6f-4505-9574-deba254e56f0","Type":"ContainerStarted","Data":"9445e6731b8cc64cd2ee5717b3a60d317c440fadd974f0afb285575a41aa6852"} Mar 08 00:59:13 crc kubenswrapper[4762]: I0308 00:59:13.501971 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" event={"ID":"068a0247-3a6f-4505-9574-deba254e56f0","Type":"ContainerStarted","Data":"854da315d7f6aad365fd5fbaa15b5b162cb429157046b8c50130cf792c3e475e"} Mar 08 00:59:13 crc kubenswrapper[4762]: I0308 00:59:13.523091 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" podStartSLOduration=1.995091852 podStartE2EDuration="2.523067195s" podCreationTimestamp="2026-03-08 00:59:11 +0000 UTC" 
firstStartedPulling="2026-03-08 00:59:12.61891516 +0000 UTC m=+2174.093059504" lastFinishedPulling="2026-03-08 00:59:13.146890463 +0000 UTC m=+2174.621034847" observedRunningTime="2026-03-08 00:59:13.520127145 +0000 UTC m=+2174.994271519" watchObservedRunningTime="2026-03-08 00:59:13.523067195 +0000 UTC m=+2174.997211549" Mar 08 00:59:47 crc kubenswrapper[4762]: I0308 00:59:47.436717 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fb7xl"] Mar 08 00:59:47 crc kubenswrapper[4762]: I0308 00:59:47.442213 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fb7xl" Mar 08 00:59:47 crc kubenswrapper[4762]: I0308 00:59:47.469720 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fb7xl"] Mar 08 00:59:47 crc kubenswrapper[4762]: I0308 00:59:47.542209 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8704be29-895d-49c1-b797-902464261640-catalog-content\") pod \"certified-operators-fb7xl\" (UID: \"8704be29-895d-49c1-b797-902464261640\") " pod="openshift-marketplace/certified-operators-fb7xl" Mar 08 00:59:47 crc kubenswrapper[4762]: I0308 00:59:47.542422 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8704be29-895d-49c1-b797-902464261640-utilities\") pod \"certified-operators-fb7xl\" (UID: \"8704be29-895d-49c1-b797-902464261640\") " pod="openshift-marketplace/certified-operators-fb7xl" Mar 08 00:59:47 crc kubenswrapper[4762]: I0308 00:59:47.542468 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kq4sh\" (UniqueName: \"kubernetes.io/projected/8704be29-895d-49c1-b797-902464261640-kube-api-access-kq4sh\") pod 
\"certified-operators-fb7xl\" (UID: \"8704be29-895d-49c1-b797-902464261640\") " pod="openshift-marketplace/certified-operators-fb7xl" Mar 08 00:59:47 crc kubenswrapper[4762]: I0308 00:59:47.643951 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8704be29-895d-49c1-b797-902464261640-utilities\") pod \"certified-operators-fb7xl\" (UID: \"8704be29-895d-49c1-b797-902464261640\") " pod="openshift-marketplace/certified-operators-fb7xl" Mar 08 00:59:47 crc kubenswrapper[4762]: I0308 00:59:47.644008 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kq4sh\" (UniqueName: \"kubernetes.io/projected/8704be29-895d-49c1-b797-902464261640-kube-api-access-kq4sh\") pod \"certified-operators-fb7xl\" (UID: \"8704be29-895d-49c1-b797-902464261640\") " pod="openshift-marketplace/certified-operators-fb7xl" Mar 08 00:59:47 crc kubenswrapper[4762]: I0308 00:59:47.644090 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8704be29-895d-49c1-b797-902464261640-catalog-content\") pod \"certified-operators-fb7xl\" (UID: \"8704be29-895d-49c1-b797-902464261640\") " pod="openshift-marketplace/certified-operators-fb7xl" Mar 08 00:59:47 crc kubenswrapper[4762]: I0308 00:59:47.644663 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8704be29-895d-49c1-b797-902464261640-catalog-content\") pod \"certified-operators-fb7xl\" (UID: \"8704be29-895d-49c1-b797-902464261640\") " pod="openshift-marketplace/certified-operators-fb7xl" Mar 08 00:59:47 crc kubenswrapper[4762]: I0308 00:59:47.645346 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8704be29-895d-49c1-b797-902464261640-utilities\") pod \"certified-operators-fb7xl\" (UID: 
\"8704be29-895d-49c1-b797-902464261640\") " pod="openshift-marketplace/certified-operators-fb7xl" Mar 08 00:59:47 crc kubenswrapper[4762]: I0308 00:59:47.675568 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kq4sh\" (UniqueName: \"kubernetes.io/projected/8704be29-895d-49c1-b797-902464261640-kube-api-access-kq4sh\") pod \"certified-operators-fb7xl\" (UID: \"8704be29-895d-49c1-b797-902464261640\") " pod="openshift-marketplace/certified-operators-fb7xl" Mar 08 00:59:47 crc kubenswrapper[4762]: I0308 00:59:47.762011 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fb7xl" Mar 08 00:59:48 crc kubenswrapper[4762]: I0308 00:59:48.265539 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fb7xl"] Mar 08 00:59:48 crc kubenswrapper[4762]: I0308 00:59:48.940417 4762 generic.go:334] "Generic (PLEG): container finished" podID="8704be29-895d-49c1-b797-902464261640" containerID="3c4226da7357eb4432d8fbb0380fb88381bb7f850ef037ebb2818a12c3aea8c3" exitCode=0 Mar 08 00:59:48 crc kubenswrapper[4762]: I0308 00:59:48.940544 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fb7xl" event={"ID":"8704be29-895d-49c1-b797-902464261640","Type":"ContainerDied","Data":"3c4226da7357eb4432d8fbb0380fb88381bb7f850ef037ebb2818a12c3aea8c3"} Mar 08 00:59:48 crc kubenswrapper[4762]: I0308 00:59:48.940920 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fb7xl" event={"ID":"8704be29-895d-49c1-b797-902464261640","Type":"ContainerStarted","Data":"ef58a60647f2d0751788f505c5ccd950be22ae5135fde546235bb629f562e454"} Mar 08 00:59:49 crc kubenswrapper[4762]: I0308 00:59:49.955941 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fb7xl" 
event={"ID":"8704be29-895d-49c1-b797-902464261640","Type":"ContainerStarted","Data":"b0b09d413e5c78017ae82733cfe62aa65be353ce6da7d79fbad3a41371c8a16b"} Mar 08 00:59:51 crc kubenswrapper[4762]: I0308 00:59:51.990409 4762 generic.go:334] "Generic (PLEG): container finished" podID="8704be29-895d-49c1-b797-902464261640" containerID="b0b09d413e5c78017ae82733cfe62aa65be353ce6da7d79fbad3a41371c8a16b" exitCode=0 Mar 08 00:59:51 crc kubenswrapper[4762]: I0308 00:59:51.990549 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fb7xl" event={"ID":"8704be29-895d-49c1-b797-902464261640","Type":"ContainerDied","Data":"b0b09d413e5c78017ae82733cfe62aa65be353ce6da7d79fbad3a41371c8a16b"} Mar 08 00:59:53 crc kubenswrapper[4762]: I0308 00:59:53.007651 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fb7xl" event={"ID":"8704be29-895d-49c1-b797-902464261640","Type":"ContainerStarted","Data":"784298bb1ac6b9b5a57fe62e8be67f7ce8543bf9750b49c428a738f75690915d"} Mar 08 00:59:53 crc kubenswrapper[4762]: I0308 00:59:53.044632 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fb7xl" podStartSLOduration=2.5525339750000002 podStartE2EDuration="6.044606814s" podCreationTimestamp="2026-03-08 00:59:47 +0000 UTC" firstStartedPulling="2026-03-08 00:59:48.942346009 +0000 UTC m=+2210.416490363" lastFinishedPulling="2026-03-08 00:59:52.434418828 +0000 UTC m=+2213.908563202" observedRunningTime="2026-03-08 00:59:53.0277237 +0000 UTC m=+2214.501868084" watchObservedRunningTime="2026-03-08 00:59:53.044606814 +0000 UTC m=+2214.518751188" Mar 08 00:59:57 crc kubenswrapper[4762]: I0308 00:59:57.762370 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fb7xl" Mar 08 00:59:57 crc kubenswrapper[4762]: I0308 00:59:57.763202 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-fb7xl" Mar 08 00:59:57 crc kubenswrapper[4762]: I0308 00:59:57.840007 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fb7xl" Mar 08 00:59:58 crc kubenswrapper[4762]: I0308 00:59:58.173503 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fb7xl" Mar 08 00:59:58 crc kubenswrapper[4762]: I0308 00:59:58.238682 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fb7xl"] Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.126800 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fb7xl" podUID="8704be29-895d-49c1-b797-902464261640" containerName="registry-server" containerID="cri-o://784298bb1ac6b9b5a57fe62e8be67f7ce8543bf9750b49c428a738f75690915d" gracePeriod=2 Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.191217 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"] Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.194973 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.199399 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.199863 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.206955 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"]
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.269854 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdg2h\" (UniqueName: \"kubernetes.io/projected/60a0152d-ae49-4d71-bf70-87f040f34a1c-kube-api-access-fdg2h\") pod \"collect-profiles-29548860-qnbsn\" (UID: \"60a0152d-ae49-4d71-bf70-87f040f34a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.269919 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60a0152d-ae49-4d71-bf70-87f040f34a1c-secret-volume\") pod \"collect-profiles-29548860-qnbsn\" (UID: \"60a0152d-ae49-4d71-bf70-87f040f34a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.270105 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a0152d-ae49-4d71-bf70-87f040f34a1c-config-volume\") pod \"collect-profiles-29548860-qnbsn\" (UID: \"60a0152d-ae49-4d71-bf70-87f040f34a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.279793 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548860-tmc6j"]
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.281860 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548860-tmc6j"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.286155 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.286439 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.286634 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.299788 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548860-tmc6j"]
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.371858 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdg2h\" (UniqueName: \"kubernetes.io/projected/60a0152d-ae49-4d71-bf70-87f040f34a1c-kube-api-access-fdg2h\") pod \"collect-profiles-29548860-qnbsn\" (UID: \"60a0152d-ae49-4d71-bf70-87f040f34a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.371921 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60a0152d-ae49-4d71-bf70-87f040f34a1c-secret-volume\") pod \"collect-profiles-29548860-qnbsn\" (UID: \"60a0152d-ae49-4d71-bf70-87f040f34a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.371974 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-884cc\" (UniqueName: \"kubernetes.io/projected/7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0-kube-api-access-884cc\") pod \"auto-csr-approver-29548860-tmc6j\" (UID: \"7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0\") " pod="openshift-infra/auto-csr-approver-29548860-tmc6j"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.372080 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a0152d-ae49-4d71-bf70-87f040f34a1c-config-volume\") pod \"collect-profiles-29548860-qnbsn\" (UID: \"60a0152d-ae49-4d71-bf70-87f040f34a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.373265 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a0152d-ae49-4d71-bf70-87f040f34a1c-config-volume\") pod \"collect-profiles-29548860-qnbsn\" (UID: \"60a0152d-ae49-4d71-bf70-87f040f34a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.398466 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60a0152d-ae49-4d71-bf70-87f040f34a1c-secret-volume\") pod \"collect-profiles-29548860-qnbsn\" (UID: \"60a0152d-ae49-4d71-bf70-87f040f34a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.403352 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdg2h\" (UniqueName: \"kubernetes.io/projected/60a0152d-ae49-4d71-bf70-87f040f34a1c-kube-api-access-fdg2h\") pod \"collect-profiles-29548860-qnbsn\" (UID: \"60a0152d-ae49-4d71-bf70-87f040f34a1c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.473985 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-884cc\" (UniqueName: \"kubernetes.io/projected/7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0-kube-api-access-884cc\") pod \"auto-csr-approver-29548860-tmc6j\" (UID: \"7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0\") " pod="openshift-infra/auto-csr-approver-29548860-tmc6j"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.496129 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.500129 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-884cc\" (UniqueName: \"kubernetes.io/projected/7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0-kube-api-access-884cc\") pod \"auto-csr-approver-29548860-tmc6j\" (UID: \"7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0\") " pod="openshift-infra/auto-csr-approver-29548860-tmc6j"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.589574 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548860-tmc6j"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.635131 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fb7xl"
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.677521 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kq4sh\" (UniqueName: \"kubernetes.io/projected/8704be29-895d-49c1-b797-902464261640-kube-api-access-kq4sh\") pod \"8704be29-895d-49c1-b797-902464261640\" (UID: \"8704be29-895d-49c1-b797-902464261640\") "
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.677570 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8704be29-895d-49c1-b797-902464261640-utilities\") pod \"8704be29-895d-49c1-b797-902464261640\" (UID: \"8704be29-895d-49c1-b797-902464261640\") "
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.678033 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8704be29-895d-49c1-b797-902464261640-catalog-content\") pod \"8704be29-895d-49c1-b797-902464261640\" (UID: \"8704be29-895d-49c1-b797-902464261640\") "
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.678911 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8704be29-895d-49c1-b797-902464261640-utilities" (OuterVolumeSpecName: "utilities") pod "8704be29-895d-49c1-b797-902464261640" (UID: "8704be29-895d-49c1-b797-902464261640"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.682506 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8704be29-895d-49c1-b797-902464261640-kube-api-access-kq4sh" (OuterVolumeSpecName: "kube-api-access-kq4sh") pod "8704be29-895d-49c1-b797-902464261640" (UID: "8704be29-895d-49c1-b797-902464261640"). InnerVolumeSpecName "kube-api-access-kq4sh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.780516 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kq4sh\" (UniqueName: \"kubernetes.io/projected/8704be29-895d-49c1-b797-902464261640-kube-api-access-kq4sh\") on node \"crc\" DevicePath \"\""
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.780876 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8704be29-895d-49c1-b797-902464261640-utilities\") on node \"crc\" DevicePath \"\""
Mar 08 01:00:00 crc kubenswrapper[4762]: I0308 01:00:00.925507 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"]
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.042743 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548860-tmc6j"]
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.050488 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8704be29-895d-49c1-b797-902464261640-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8704be29-895d-49c1-b797-902464261640" (UID: "8704be29-895d-49c1-b797-902464261640"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.067969 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-55bpv"]
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.082548 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-55bpv"]
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.088660 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8704be29-895d-49c1-b797-902464261640-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.140155 4762 generic.go:334] "Generic (PLEG): container finished" podID="8704be29-895d-49c1-b797-902464261640" containerID="784298bb1ac6b9b5a57fe62e8be67f7ce8543bf9750b49c428a738f75690915d" exitCode=0
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.140337 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fb7xl" event={"ID":"8704be29-895d-49c1-b797-902464261640","Type":"ContainerDied","Data":"784298bb1ac6b9b5a57fe62e8be67f7ce8543bf9750b49c428a738f75690915d"}
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.140399 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fb7xl" event={"ID":"8704be29-895d-49c1-b797-902464261640","Type":"ContainerDied","Data":"ef58a60647f2d0751788f505c5ccd950be22ae5135fde546235bb629f562e454"}
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.140431 4762 scope.go:117] "RemoveContainer" containerID="784298bb1ac6b9b5a57fe62e8be67f7ce8543bf9750b49c428a738f75690915d"
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.141381 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fb7xl"
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.141451 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548860-tmc6j" event={"ID":"7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0","Type":"ContainerStarted","Data":"671512996f3cf9a386d65d4eb8199d19529ccd62f0ddb58740fc752221828b74"}
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.142847 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn" event={"ID":"60a0152d-ae49-4d71-bf70-87f040f34a1c","Type":"ContainerStarted","Data":"db89ef5eba2806a89875a4388efd02b7b520421647881137b78da1dd89f2020c"}
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.142882 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn" event={"ID":"60a0152d-ae49-4d71-bf70-87f040f34a1c","Type":"ContainerStarted","Data":"089f352a1e73d4288a67d94f9a4cb053abc493af4f77234cabe5a29e0e33fbea"}
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.165992 4762 scope.go:117] "RemoveContainer" containerID="b0b09d413e5c78017ae82733cfe62aa65be353ce6da7d79fbad3a41371c8a16b"
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.169554 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn" podStartSLOduration=1.169529293 podStartE2EDuration="1.169529293s" podCreationTimestamp="2026-03-08 01:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 01:00:01.157068983 +0000 UTC m=+2222.631213337" watchObservedRunningTime="2026-03-08 01:00:01.169529293 +0000 UTC m=+2222.643673637"
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.189381 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fb7xl"]
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.195873 4762 scope.go:117] "RemoveContainer" containerID="3c4226da7357eb4432d8fbb0380fb88381bb7f850ef037ebb2818a12c3aea8c3"
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.197442 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fb7xl"]
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.214318 4762 scope.go:117] "RemoveContainer" containerID="784298bb1ac6b9b5a57fe62e8be67f7ce8543bf9750b49c428a738f75690915d"
Mar 08 01:00:01 crc kubenswrapper[4762]: E0308 01:00:01.214821 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"784298bb1ac6b9b5a57fe62e8be67f7ce8543bf9750b49c428a738f75690915d\": container with ID starting with 784298bb1ac6b9b5a57fe62e8be67f7ce8543bf9750b49c428a738f75690915d not found: ID does not exist" containerID="784298bb1ac6b9b5a57fe62e8be67f7ce8543bf9750b49c428a738f75690915d"
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.214856 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"784298bb1ac6b9b5a57fe62e8be67f7ce8543bf9750b49c428a738f75690915d"} err="failed to get container status \"784298bb1ac6b9b5a57fe62e8be67f7ce8543bf9750b49c428a738f75690915d\": rpc error: code = NotFound desc = could not find container \"784298bb1ac6b9b5a57fe62e8be67f7ce8543bf9750b49c428a738f75690915d\": container with ID starting with 784298bb1ac6b9b5a57fe62e8be67f7ce8543bf9750b49c428a738f75690915d not found: ID does not exist"
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.214878 4762 scope.go:117] "RemoveContainer" containerID="b0b09d413e5c78017ae82733cfe62aa65be353ce6da7d79fbad3a41371c8a16b"
Mar 08 01:00:01 crc kubenswrapper[4762]: E0308 01:00:01.215172 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0b09d413e5c78017ae82733cfe62aa65be353ce6da7d79fbad3a41371c8a16b\": container with ID starting with b0b09d413e5c78017ae82733cfe62aa65be353ce6da7d79fbad3a41371c8a16b not found: ID does not exist" containerID="b0b09d413e5c78017ae82733cfe62aa65be353ce6da7d79fbad3a41371c8a16b"
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.215203 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b09d413e5c78017ae82733cfe62aa65be353ce6da7d79fbad3a41371c8a16b"} err="failed to get container status \"b0b09d413e5c78017ae82733cfe62aa65be353ce6da7d79fbad3a41371c8a16b\": rpc error: code = NotFound desc = could not find container \"b0b09d413e5c78017ae82733cfe62aa65be353ce6da7d79fbad3a41371c8a16b\": container with ID starting with b0b09d413e5c78017ae82733cfe62aa65be353ce6da7d79fbad3a41371c8a16b not found: ID does not exist"
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.215230 4762 scope.go:117] "RemoveContainer" containerID="3c4226da7357eb4432d8fbb0380fb88381bb7f850ef037ebb2818a12c3aea8c3"
Mar 08 01:00:01 crc kubenswrapper[4762]: E0308 01:00:01.215545 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c4226da7357eb4432d8fbb0380fb88381bb7f850ef037ebb2818a12c3aea8c3\": container with ID starting with 3c4226da7357eb4432d8fbb0380fb88381bb7f850ef037ebb2818a12c3aea8c3 not found: ID does not exist" containerID="3c4226da7357eb4432d8fbb0380fb88381bb7f850ef037ebb2818a12c3aea8c3"
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.215563 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c4226da7357eb4432d8fbb0380fb88381bb7f850ef037ebb2818a12c3aea8c3"} err="failed to get container status \"3c4226da7357eb4432d8fbb0380fb88381bb7f850ef037ebb2818a12c3aea8c3\": rpc error: code = NotFound desc = could not find container \"3c4226da7357eb4432d8fbb0380fb88381bb7f850ef037ebb2818a12c3aea8c3\": container with ID starting with 3c4226da7357eb4432d8fbb0380fb88381bb7f850ef037ebb2818a12c3aea8c3 not found: ID does not exist"
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.301518 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8704be29-895d-49c1-b797-902464261640" path="/var/lib/kubelet/pods/8704be29-895d-49c1-b797-902464261640/volumes"
Mar 08 01:00:01 crc kubenswrapper[4762]: I0308 01:00:01.302364 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="972a5997-389c-467b-ae2f-bc678f076277" path="/var/lib/kubelet/pods/972a5997-389c-467b-ae2f-bc678f076277/volumes"
Mar 08 01:00:02 crc kubenswrapper[4762]: I0308 01:00:02.155282 4762 generic.go:334] "Generic (PLEG): container finished" podID="60a0152d-ae49-4d71-bf70-87f040f34a1c" containerID="db89ef5eba2806a89875a4388efd02b7b520421647881137b78da1dd89f2020c" exitCode=0
Mar 08 01:00:02 crc kubenswrapper[4762]: I0308 01:00:02.155399 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn" event={"ID":"60a0152d-ae49-4d71-bf70-87f040f34a1c","Type":"ContainerDied","Data":"db89ef5eba2806a89875a4388efd02b7b520421647881137b78da1dd89f2020c"}
Mar 08 01:00:03 crc kubenswrapper[4762]: I0308 01:00:03.648626 4762 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"
Mar 08 01:00:03 crc kubenswrapper[4762]: I0308 01:00:03.744961 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60a0152d-ae49-4d71-bf70-87f040f34a1c-secret-volume\") pod \"60a0152d-ae49-4d71-bf70-87f040f34a1c\" (UID: \"60a0152d-ae49-4d71-bf70-87f040f34a1c\") "
Mar 08 01:00:03 crc kubenswrapper[4762]: I0308 01:00:03.745379 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a0152d-ae49-4d71-bf70-87f040f34a1c-config-volume\") pod \"60a0152d-ae49-4d71-bf70-87f040f34a1c\" (UID: \"60a0152d-ae49-4d71-bf70-87f040f34a1c\") "
Mar 08 01:00:03 crc kubenswrapper[4762]: I0308 01:00:03.745750 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdg2h\" (UniqueName: \"kubernetes.io/projected/60a0152d-ae49-4d71-bf70-87f040f34a1c-kube-api-access-fdg2h\") pod \"60a0152d-ae49-4d71-bf70-87f040f34a1c\" (UID: \"60a0152d-ae49-4d71-bf70-87f040f34a1c\") "
Mar 08 01:00:03 crc kubenswrapper[4762]: I0308 01:00:03.746159 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a0152d-ae49-4d71-bf70-87f040f34a1c-config-volume" (OuterVolumeSpecName: "config-volume") pod "60a0152d-ae49-4d71-bf70-87f040f34a1c" (UID: "60a0152d-ae49-4d71-bf70-87f040f34a1c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 01:00:03 crc kubenswrapper[4762]: I0308 01:00:03.747511 4762 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/60a0152d-ae49-4d71-bf70-87f040f34a1c-config-volume\") on node \"crc\" DevicePath \"\""
Mar 08 01:00:03 crc kubenswrapper[4762]: I0308 01:00:03.761153 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60a0152d-ae49-4d71-bf70-87f040f34a1c-kube-api-access-fdg2h" (OuterVolumeSpecName: "kube-api-access-fdg2h") pod "60a0152d-ae49-4d71-bf70-87f040f34a1c" (UID: "60a0152d-ae49-4d71-bf70-87f040f34a1c"). InnerVolumeSpecName "kube-api-access-fdg2h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 01:00:03 crc kubenswrapper[4762]: I0308 01:00:03.761385 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60a0152d-ae49-4d71-bf70-87f040f34a1c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "60a0152d-ae49-4d71-bf70-87f040f34a1c" (UID: "60a0152d-ae49-4d71-bf70-87f040f34a1c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 01:00:03 crc kubenswrapper[4762]: I0308 01:00:03.849274 4762 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/60a0152d-ae49-4d71-bf70-87f040f34a1c-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 08 01:00:03 crc kubenswrapper[4762]: I0308 01:00:03.849318 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdg2h\" (UniqueName: \"kubernetes.io/projected/60a0152d-ae49-4d71-bf70-87f040f34a1c-kube-api-access-fdg2h\") on node \"crc\" DevicePath \"\""
Mar 08 01:00:04 crc kubenswrapper[4762]: I0308 01:00:04.187322 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn" event={"ID":"60a0152d-ae49-4d71-bf70-87f040f34a1c","Type":"ContainerDied","Data":"089f352a1e73d4288a67d94f9a4cb053abc493af4f77234cabe5a29e0e33fbea"}
Mar 08 01:00:04 crc kubenswrapper[4762]: I0308 01:00:04.187373 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="089f352a1e73d4288a67d94f9a4cb053abc493af4f77234cabe5a29e0e33fbea"
Mar 08 01:00:04 crc kubenswrapper[4762]: I0308 01:00:04.187434 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"
Mar 08 01:00:04 crc kubenswrapper[4762]: I0308 01:00:04.232958 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr"]
Mar 08 01:00:04 crc kubenswrapper[4762]: I0308 01:00:04.244141 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548815-whwnr"]
Mar 08 01:00:05 crc kubenswrapper[4762]: I0308 01:00:05.287741 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="681404ff-89eb-420d-b1e2-6769d4b51636" path="/var/lib/kubelet/pods/681404ff-89eb-420d-b1e2-6769d4b51636/volumes"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.096220 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8xphj"]
Mar 08 01:00:12 crc kubenswrapper[4762]: E0308 01:00:12.097407 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8704be29-895d-49c1-b797-902464261640" containerName="registry-server"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.097424 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8704be29-895d-49c1-b797-902464261640" containerName="registry-server"
Mar 08 01:00:12 crc kubenswrapper[4762]: E0308 01:00:12.097442 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60a0152d-ae49-4d71-bf70-87f040f34a1c" containerName="collect-profiles"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.097450 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="60a0152d-ae49-4d71-bf70-87f040f34a1c" containerName="collect-profiles"
Mar 08 01:00:12 crc kubenswrapper[4762]: E0308 01:00:12.097491 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8704be29-895d-49c1-b797-902464261640" containerName="extract-content"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.097500 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8704be29-895d-49c1-b797-902464261640" containerName="extract-content"
Mar 08 01:00:12 crc kubenswrapper[4762]: E0308 01:00:12.097511 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8704be29-895d-49c1-b797-902464261640" containerName="extract-utilities"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.097519 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8704be29-895d-49c1-b797-902464261640" containerName="extract-utilities"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.097850 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="8704be29-895d-49c1-b797-902464261640" containerName="registry-server"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.097865 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="60a0152d-ae49-4d71-bf70-87f040f34a1c" containerName="collect-profiles"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.099776 4762 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-8xphj"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.108592 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xphj"]
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.271280 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2555k\" (UniqueName: \"kubernetes.io/projected/b9155b5a-a347-4985-bb1e-5e52bda63c38-kube-api-access-2555k\") pod \"community-operators-8xphj\" (UID: \"b9155b5a-a347-4985-bb1e-5e52bda63c38\") " pod="openshift-marketplace/community-operators-8xphj"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.271430 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9155b5a-a347-4985-bb1e-5e52bda63c38-catalog-content\") pod \"community-operators-8xphj\" (UID: \"b9155b5a-a347-4985-bb1e-5e52bda63c38\") " pod="openshift-marketplace/community-operators-8xphj"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.271500 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9155b5a-a347-4985-bb1e-5e52bda63c38-utilities\") pod \"community-operators-8xphj\" (UID: \"b9155b5a-a347-4985-bb1e-5e52bda63c38\") " pod="openshift-marketplace/community-operators-8xphj"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.374105 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9155b5a-a347-4985-bb1e-5e52bda63c38-catalog-content\") pod \"community-operators-8xphj\" (UID: \"b9155b5a-a347-4985-bb1e-5e52bda63c38\") " pod="openshift-marketplace/community-operators-8xphj"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.374210 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9155b5a-a347-4985-bb1e-5e52bda63c38-utilities\") pod \"community-operators-8xphj\" (UID: \"b9155b5a-a347-4985-bb1e-5e52bda63c38\") " pod="openshift-marketplace/community-operators-8xphj"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.375104 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9155b5a-a347-4985-bb1e-5e52bda63c38-catalog-content\") pod \"community-operators-8xphj\" (UID: \"b9155b5a-a347-4985-bb1e-5e52bda63c38\") " pod="openshift-marketplace/community-operators-8xphj"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.375120 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9155b5a-a347-4985-bb1e-5e52bda63c38-utilities\") pod \"community-operators-8xphj\" (UID: \"b9155b5a-a347-4985-bb1e-5e52bda63c38\") " pod="openshift-marketplace/community-operators-8xphj"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.376036 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2555k\" (UniqueName: \"kubernetes.io/projected/b9155b5a-a347-4985-bb1e-5e52bda63c38-kube-api-access-2555k\") pod \"community-operators-8xphj\" (UID: \"b9155b5a-a347-4985-bb1e-5e52bda63c38\") " pod="openshift-marketplace/community-operators-8xphj"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.395955 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2555k\" (UniqueName: \"kubernetes.io/projected/b9155b5a-a347-4985-bb1e-5e52bda63c38-kube-api-access-2555k\") pod \"community-operators-8xphj\" (UID: \"b9155b5a-a347-4985-bb1e-5e52bda63c38\") " pod="openshift-marketplace/community-operators-8xphj"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.437036 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8xphj"
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.851532 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.851830 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 08 01:00:12 crc kubenswrapper[4762]: W0308 01:00:12.974803 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9155b5a_a347_4985_bb1e_5e52bda63c38.slice/crio-d040d95c2077d3f83a13bbe27e8fb845d5b312ed1da9fdcfc52a777e44ffaf26 WatchSource:0}: Error finding container d040d95c2077d3f83a13bbe27e8fb845d5b312ed1da9fdcfc52a777e44ffaf26: Status 404 returned error can't find the container with id d040d95c2077d3f83a13bbe27e8fb845d5b312ed1da9fdcfc52a777e44ffaf26
Mar 08 01:00:12 crc kubenswrapper[4762]: I0308 01:00:12.975403 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8xphj"]
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.094834 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vnl9l"]
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.100325 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vnl9l"
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.117549 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vnl9l"]
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.297094 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56318e6-ae41-4d92-be18-5cecf4f47c6e-catalog-content\") pod \"redhat-marketplace-vnl9l\" (UID: \"e56318e6-ae41-4d92-be18-5cecf4f47c6e\") " pod="openshift-marketplace/redhat-marketplace-vnl9l"
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.297471 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnwb5\" (UniqueName: \"kubernetes.io/projected/e56318e6-ae41-4d92-be18-5cecf4f47c6e-kube-api-access-cnwb5\") pod \"redhat-marketplace-vnl9l\" (UID: \"e56318e6-ae41-4d92-be18-5cecf4f47c6e\") " pod="openshift-marketplace/redhat-marketplace-vnl9l"
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.297528 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56318e6-ae41-4d92-be18-5cecf4f47c6e-utilities\") pod \"redhat-marketplace-vnl9l\" (UID: \"e56318e6-ae41-4d92-be18-5cecf4f47c6e\") " pod="openshift-marketplace/redhat-marketplace-vnl9l"
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.310971 4762 generic.go:334] "Generic (PLEG): container finished" podID="b9155b5a-a347-4985-bb1e-5e52bda63c38" containerID="69bebec9386b940982e283619ca136ed9fb75fd6b063fe8c99f1fa53fc6eaabd" exitCode=0
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.311027 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xphj" event={"ID":"b9155b5a-a347-4985-bb1e-5e52bda63c38","Type":"ContainerDied","Data":"69bebec9386b940982e283619ca136ed9fb75fd6b063fe8c99f1fa53fc6eaabd"}
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.311057 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xphj" event={"ID":"b9155b5a-a347-4985-bb1e-5e52bda63c38","Type":"ContainerStarted","Data":"d040d95c2077d3f83a13bbe27e8fb845d5b312ed1da9fdcfc52a777e44ffaf26"}
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.400185 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56318e6-ae41-4d92-be18-5cecf4f47c6e-catalog-content\") pod \"redhat-marketplace-vnl9l\" (UID: \"e56318e6-ae41-4d92-be18-5cecf4f47c6e\") " pod="openshift-marketplace/redhat-marketplace-vnl9l"
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.400623 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnwb5\" (UniqueName: \"kubernetes.io/projected/e56318e6-ae41-4d92-be18-5cecf4f47c6e-kube-api-access-cnwb5\") pod \"redhat-marketplace-vnl9l\" (UID: \"e56318e6-ae41-4d92-be18-5cecf4f47c6e\") " pod="openshift-marketplace/redhat-marketplace-vnl9l"
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.400680 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56318e6-ae41-4d92-be18-5cecf4f47c6e-catalog-content\") pod \"redhat-marketplace-vnl9l\" (UID: \"e56318e6-ae41-4d92-be18-5cecf4f47c6e\") " pod="openshift-marketplace/redhat-marketplace-vnl9l"
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.401125 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56318e6-ae41-4d92-be18-5cecf4f47c6e-utilities\") pod \"redhat-marketplace-vnl9l\" (UID: \"e56318e6-ae41-4d92-be18-5cecf4f47c6e\") " pod="openshift-marketplace/redhat-marketplace-vnl9l"
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.401569 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56318e6-ae41-4d92-be18-5cecf4f47c6e-utilities\") pod \"redhat-marketplace-vnl9l\" (UID: \"e56318e6-ae41-4d92-be18-5cecf4f47c6e\") " pod="openshift-marketplace/redhat-marketplace-vnl9l"
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.423424 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnwb5\" (UniqueName: \"kubernetes.io/projected/e56318e6-ae41-4d92-be18-5cecf4f47c6e-kube-api-access-cnwb5\") pod \"redhat-marketplace-vnl9l\" (UID: \"e56318e6-ae41-4d92-be18-5cecf4f47c6e\") " pod="openshift-marketplace/redhat-marketplace-vnl9l"
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.440689 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vnl9l"
Mar 08 01:00:13 crc kubenswrapper[4762]: I0308 01:00:13.997090 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vnl9l"]
Mar 08 01:00:14 crc kubenswrapper[4762]: I0308 01:00:14.322836 4762 generic.go:334] "Generic (PLEG): container finished" podID="e56318e6-ae41-4d92-be18-5cecf4f47c6e" containerID="721a22d3e8ad2685d95833a616955d8f4d6f64d5070d4f5377f0fa9981a6ff20" exitCode=0
Mar 08 01:00:14 crc kubenswrapper[4762]: I0308 01:00:14.322893 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnl9l" event={"ID":"e56318e6-ae41-4d92-be18-5cecf4f47c6e","Type":"ContainerDied","Data":"721a22d3e8ad2685d95833a616955d8f4d6f64d5070d4f5377f0fa9981a6ff20"}
Mar 08 01:00:14 crc kubenswrapper[4762]: I0308 01:00:14.322930 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnl9l" event={"ID":"e56318e6-ae41-4d92-be18-5cecf4f47c6e","Type":"ContainerStarted","Data":"7f67abe4624cb9869f84d514074b0dbf1012ce214180e42085c62c1da016fe5e"}
Mar 08 01:00:19 crc kubenswrapper[4762]: I0308 01:00:19.420805 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xphj" event={"ID":"b9155b5a-a347-4985-bb1e-5e52bda63c38","Type":"ContainerStarted","Data":"da890178f4551f6e025fb1306756f7b570c821947c8ec52aeb155e373647cdd3"}
Mar 08 01:00:21 crc kubenswrapper[4762]: I0308 01:00:21.451479 4762 generic.go:334] "Generic (PLEG): container finished" podID="b9155b5a-a347-4985-bb1e-5e52bda63c38" containerID="da890178f4551f6e025fb1306756f7b570c821947c8ec52aeb155e373647cdd3" exitCode=0
Mar 08 01:00:21 crc kubenswrapper[4762]: I0308 01:00:21.451523 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xphj" event={"ID":"b9155b5a-a347-4985-bb1e-5e52bda63c38","Type":"ContainerDied","Data":"da890178f4551f6e025fb1306756f7b570c821947c8ec52aeb155e373647cdd3"}
Mar 08 01:00:22 crc kubenswrapper[4762]: I0308 01:00:22.468317 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xphj" event={"ID":"b9155b5a-a347-4985-bb1e-5e52bda63c38","Type":"ContainerStarted","Data":"ba8feeb6a45cdbc93df381e1b77d676c02b964f31a56b06ddac1d1989a09c643"}
Mar 08 01:00:22 crc kubenswrapper[4762]: I0308 01:00:22.503184 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8xphj" podStartSLOduration=1.89809098 podStartE2EDuration="10.503149745s" podCreationTimestamp="2026-03-08 01:00:12 +0000 UTC" firstStartedPulling="2026-03-08 01:00:13.312500233 +0000 UTC m=+2234.786644577" lastFinishedPulling="2026-03-08 01:00:21.917558988 +0000 UTC m=+2243.391703342" observedRunningTime="2026-03-08 01:00:22.498027818 +0000 UTC m=+2243.972172172" watchObservedRunningTime="2026-03-08 01:00:22.503149745 +0000 UTC
m=+2243.977294119" Mar 08 01:00:23 crc kubenswrapper[4762]: I0308 01:00:23.482373 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnl9l" event={"ID":"e56318e6-ae41-4d92-be18-5cecf4f47c6e","Type":"ContainerStarted","Data":"db5d3b2f02d3830de4f042e67af664fa0d8a922dc8f156061e31e8b3d52c0877"} Mar 08 01:00:25 crc kubenswrapper[4762]: I0308 01:00:25.514494 4762 generic.go:334] "Generic (PLEG): container finished" podID="e56318e6-ae41-4d92-be18-5cecf4f47c6e" containerID="db5d3b2f02d3830de4f042e67af664fa0d8a922dc8f156061e31e8b3d52c0877" exitCode=0 Mar 08 01:00:25 crc kubenswrapper[4762]: I0308 01:00:25.514947 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnl9l" event={"ID":"e56318e6-ae41-4d92-be18-5cecf4f47c6e","Type":"ContainerDied","Data":"db5d3b2f02d3830de4f042e67af664fa0d8a922dc8f156061e31e8b3d52c0877"} Mar 08 01:00:26 crc kubenswrapper[4762]: I0308 01:00:26.531225 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnl9l" event={"ID":"e56318e6-ae41-4d92-be18-5cecf4f47c6e","Type":"ContainerStarted","Data":"e8571ac478c87bcb0b82eb7573e847dad0b644b39f37d237aba6b140a2ceece5"} Mar 08 01:00:26 crc kubenswrapper[4762]: I0308 01:00:26.567299 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vnl9l" podStartSLOduration=1.936963835 podStartE2EDuration="13.567272549s" podCreationTimestamp="2026-03-08 01:00:13 +0000 UTC" firstStartedPulling="2026-03-08 01:00:14.327383788 +0000 UTC m=+2235.801528132" lastFinishedPulling="2026-03-08 01:00:25.957692472 +0000 UTC m=+2247.431836846" observedRunningTime="2026-03-08 01:00:26.556153591 +0000 UTC m=+2248.030297975" watchObservedRunningTime="2026-03-08 01:00:26.567272549 +0000 UTC m=+2248.041416923" Mar 08 01:00:28 crc kubenswrapper[4762]: I0308 01:00:28.588043 4762 scope.go:117] "RemoveContainer" 
containerID="68450cd8d95dabcb83a0e4ccc06c26f9b1893e25d494127354a4137a0519de2c" Mar 08 01:00:28 crc kubenswrapper[4762]: I0308 01:00:28.624121 4762 scope.go:117] "RemoveContainer" containerID="59c812da05d96c3fd40e2ce82e7659bdf249330efc6613b4b9dbac4ffcd05094" Mar 08 01:00:32 crc kubenswrapper[4762]: I0308 01:00:32.438855 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8xphj" Mar 08 01:00:32 crc kubenswrapper[4762]: I0308 01:00:32.439584 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8xphj" Mar 08 01:00:32 crc kubenswrapper[4762]: I0308 01:00:32.533089 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8xphj" Mar 08 01:00:32 crc kubenswrapper[4762]: I0308 01:00:32.696554 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8xphj" Mar 08 01:00:32 crc kubenswrapper[4762]: I0308 01:00:32.783266 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8xphj"] Mar 08 01:00:33 crc kubenswrapper[4762]: I0308 01:00:33.441202 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vnl9l" Mar 08 01:00:33 crc kubenswrapper[4762]: I0308 01:00:33.441257 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vnl9l" Mar 08 01:00:33 crc kubenswrapper[4762]: I0308 01:00:33.528991 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vnl9l" Mar 08 01:00:33 crc kubenswrapper[4762]: I0308 01:00:33.715965 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vnl9l" Mar 08 01:00:34 crc kubenswrapper[4762]: I0308 
01:00:34.642935 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8xphj" podUID="b9155b5a-a347-4985-bb1e-5e52bda63c38" containerName="registry-server" containerID="cri-o://ba8feeb6a45cdbc93df381e1b77d676c02b964f31a56b06ddac1d1989a09c643" gracePeriod=2 Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.063089 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-tl48n"] Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.073601 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-tl48n"] Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.189082 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vnl9l"] Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.230404 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8xphj" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.279276 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a277fe04-ceb8-40ca-aa94-8c1f440cf7c9" path="/var/lib/kubelet/pods/a277fe04-ceb8-40ca-aa94-8c1f440cf7c9/volumes" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.353628 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9155b5a-a347-4985-bb1e-5e52bda63c38-utilities\") pod \"b9155b5a-a347-4985-bb1e-5e52bda63c38\" (UID: \"b9155b5a-a347-4985-bb1e-5e52bda63c38\") " Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.353721 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2555k\" (UniqueName: \"kubernetes.io/projected/b9155b5a-a347-4985-bb1e-5e52bda63c38-kube-api-access-2555k\") pod \"b9155b5a-a347-4985-bb1e-5e52bda63c38\" (UID: \"b9155b5a-a347-4985-bb1e-5e52bda63c38\") " Mar 08 01:00:35 crc 
kubenswrapper[4762]: I0308 01:00:35.354009 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9155b5a-a347-4985-bb1e-5e52bda63c38-catalog-content\") pod \"b9155b5a-a347-4985-bb1e-5e52bda63c38\" (UID: \"b9155b5a-a347-4985-bb1e-5e52bda63c38\") " Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.354741 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9155b5a-a347-4985-bb1e-5e52bda63c38-utilities" (OuterVolumeSpecName: "utilities") pod "b9155b5a-a347-4985-bb1e-5e52bda63c38" (UID: "b9155b5a-a347-4985-bb1e-5e52bda63c38"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.354930 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9155b5a-a347-4985-bb1e-5e52bda63c38-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.362507 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9155b5a-a347-4985-bb1e-5e52bda63c38-kube-api-access-2555k" (OuterVolumeSpecName: "kube-api-access-2555k") pod "b9155b5a-a347-4985-bb1e-5e52bda63c38" (UID: "b9155b5a-a347-4985-bb1e-5e52bda63c38"). InnerVolumeSpecName "kube-api-access-2555k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.438795 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9155b5a-a347-4985-bb1e-5e52bda63c38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9155b5a-a347-4985-bb1e-5e52bda63c38" (UID: "b9155b5a-a347-4985-bb1e-5e52bda63c38"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.456813 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9155b5a-a347-4985-bb1e-5e52bda63c38-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.456835 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2555k\" (UniqueName: \"kubernetes.io/projected/b9155b5a-a347-4985-bb1e-5e52bda63c38-kube-api-access-2555k\") on node \"crc\" DevicePath \"\"" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.654950 4762 generic.go:334] "Generic (PLEG): container finished" podID="b9155b5a-a347-4985-bb1e-5e52bda63c38" containerID="ba8feeb6a45cdbc93df381e1b77d676c02b964f31a56b06ddac1d1989a09c643" exitCode=0 Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.655062 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xphj" event={"ID":"b9155b5a-a347-4985-bb1e-5e52bda63c38","Type":"ContainerDied","Data":"ba8feeb6a45cdbc93df381e1b77d676c02b964f31a56b06ddac1d1989a09c643"} Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.655055 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8xphj" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.655131 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8xphj" event={"ID":"b9155b5a-a347-4985-bb1e-5e52bda63c38","Type":"ContainerDied","Data":"d040d95c2077d3f83a13bbe27e8fb845d5b312ed1da9fdcfc52a777e44ffaf26"} Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.655162 4762 scope.go:117] "RemoveContainer" containerID="ba8feeb6a45cdbc93df381e1b77d676c02b964f31a56b06ddac1d1989a09c643" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.655787 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vnl9l" podUID="e56318e6-ae41-4d92-be18-5cecf4f47c6e" containerName="registry-server" containerID="cri-o://e8571ac478c87bcb0b82eb7573e847dad0b644b39f37d237aba6b140a2ceece5" gracePeriod=2 Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.699951 4762 scope.go:117] "RemoveContainer" containerID="da890178f4551f6e025fb1306756f7b570c821947c8ec52aeb155e373647cdd3" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.711845 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8xphj"] Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.717370 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8xphj"] Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.760881 4762 scope.go:117] "RemoveContainer" containerID="69bebec9386b940982e283619ca136ed9fb75fd6b063fe8c99f1fa53fc6eaabd" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.862835 4762 scope.go:117] "RemoveContainer" containerID="ba8feeb6a45cdbc93df381e1b77d676c02b964f31a56b06ddac1d1989a09c643" Mar 08 01:00:35 crc kubenswrapper[4762]: E0308 01:00:35.864695 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"ba8feeb6a45cdbc93df381e1b77d676c02b964f31a56b06ddac1d1989a09c643\": container with ID starting with ba8feeb6a45cdbc93df381e1b77d676c02b964f31a56b06ddac1d1989a09c643 not found: ID does not exist" containerID="ba8feeb6a45cdbc93df381e1b77d676c02b964f31a56b06ddac1d1989a09c643" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.864741 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba8feeb6a45cdbc93df381e1b77d676c02b964f31a56b06ddac1d1989a09c643"} err="failed to get container status \"ba8feeb6a45cdbc93df381e1b77d676c02b964f31a56b06ddac1d1989a09c643\": rpc error: code = NotFound desc = could not find container \"ba8feeb6a45cdbc93df381e1b77d676c02b964f31a56b06ddac1d1989a09c643\": container with ID starting with ba8feeb6a45cdbc93df381e1b77d676c02b964f31a56b06ddac1d1989a09c643 not found: ID does not exist" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.864791 4762 scope.go:117] "RemoveContainer" containerID="da890178f4551f6e025fb1306756f7b570c821947c8ec52aeb155e373647cdd3" Mar 08 01:00:35 crc kubenswrapper[4762]: E0308 01:00:35.865177 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da890178f4551f6e025fb1306756f7b570c821947c8ec52aeb155e373647cdd3\": container with ID starting with da890178f4551f6e025fb1306756f7b570c821947c8ec52aeb155e373647cdd3 not found: ID does not exist" containerID="da890178f4551f6e025fb1306756f7b570c821947c8ec52aeb155e373647cdd3" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.865221 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da890178f4551f6e025fb1306756f7b570c821947c8ec52aeb155e373647cdd3"} err="failed to get container status \"da890178f4551f6e025fb1306756f7b570c821947c8ec52aeb155e373647cdd3\": rpc error: code = NotFound desc = could not find container 
\"da890178f4551f6e025fb1306756f7b570c821947c8ec52aeb155e373647cdd3\": container with ID starting with da890178f4551f6e025fb1306756f7b570c821947c8ec52aeb155e373647cdd3 not found: ID does not exist" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.865250 4762 scope.go:117] "RemoveContainer" containerID="69bebec9386b940982e283619ca136ed9fb75fd6b063fe8c99f1fa53fc6eaabd" Mar 08 01:00:35 crc kubenswrapper[4762]: E0308 01:00:35.865692 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69bebec9386b940982e283619ca136ed9fb75fd6b063fe8c99f1fa53fc6eaabd\": container with ID starting with 69bebec9386b940982e283619ca136ed9fb75fd6b063fe8c99f1fa53fc6eaabd not found: ID does not exist" containerID="69bebec9386b940982e283619ca136ed9fb75fd6b063fe8c99f1fa53fc6eaabd" Mar 08 01:00:35 crc kubenswrapper[4762]: I0308 01:00:35.865723 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69bebec9386b940982e283619ca136ed9fb75fd6b063fe8c99f1fa53fc6eaabd"} err="failed to get container status \"69bebec9386b940982e283619ca136ed9fb75fd6b063fe8c99f1fa53fc6eaabd\": rpc error: code = NotFound desc = could not find container \"69bebec9386b940982e283619ca136ed9fb75fd6b063fe8c99f1fa53fc6eaabd\": container with ID starting with 69bebec9386b940982e283619ca136ed9fb75fd6b063fe8c99f1fa53fc6eaabd not found: ID does not exist" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.225875 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vnl9l" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.376364 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnwb5\" (UniqueName: \"kubernetes.io/projected/e56318e6-ae41-4d92-be18-5cecf4f47c6e-kube-api-access-cnwb5\") pod \"e56318e6-ae41-4d92-be18-5cecf4f47c6e\" (UID: \"e56318e6-ae41-4d92-be18-5cecf4f47c6e\") " Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.376439 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56318e6-ae41-4d92-be18-5cecf4f47c6e-catalog-content\") pod \"e56318e6-ae41-4d92-be18-5cecf4f47c6e\" (UID: \"e56318e6-ae41-4d92-be18-5cecf4f47c6e\") " Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.376514 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56318e6-ae41-4d92-be18-5cecf4f47c6e-utilities\") pod \"e56318e6-ae41-4d92-be18-5cecf4f47c6e\" (UID: \"e56318e6-ae41-4d92-be18-5cecf4f47c6e\") " Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.378495 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e56318e6-ae41-4d92-be18-5cecf4f47c6e-utilities" (OuterVolumeSpecName: "utilities") pod "e56318e6-ae41-4d92-be18-5cecf4f47c6e" (UID: "e56318e6-ae41-4d92-be18-5cecf4f47c6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.393496 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e56318e6-ae41-4d92-be18-5cecf4f47c6e-kube-api-access-cnwb5" (OuterVolumeSpecName: "kube-api-access-cnwb5") pod "e56318e6-ae41-4d92-be18-5cecf4f47c6e" (UID: "e56318e6-ae41-4d92-be18-5cecf4f47c6e"). InnerVolumeSpecName "kube-api-access-cnwb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.405492 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e56318e6-ae41-4d92-be18-5cecf4f47c6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e56318e6-ae41-4d92-be18-5cecf4f47c6e" (UID: "e56318e6-ae41-4d92-be18-5cecf4f47c6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.479456 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnwb5\" (UniqueName: \"kubernetes.io/projected/e56318e6-ae41-4d92-be18-5cecf4f47c6e-kube-api-access-cnwb5\") on node \"crc\" DevicePath \"\"" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.479500 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e56318e6-ae41-4d92-be18-5cecf4f47c6e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.479513 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e56318e6-ae41-4d92-be18-5cecf4f47c6e-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.669367 4762 generic.go:334] "Generic (PLEG): container finished" podID="e56318e6-ae41-4d92-be18-5cecf4f47c6e" containerID="e8571ac478c87bcb0b82eb7573e847dad0b644b39f37d237aba6b140a2ceece5" exitCode=0 Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.669578 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnl9l" event={"ID":"e56318e6-ae41-4d92-be18-5cecf4f47c6e","Type":"ContainerDied","Data":"e8571ac478c87bcb0b82eb7573e847dad0b644b39f37d237aba6b140a2ceece5"} Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.669668 4762 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vnl9l" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.669740 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vnl9l" event={"ID":"e56318e6-ae41-4d92-be18-5cecf4f47c6e","Type":"ContainerDied","Data":"7f67abe4624cb9869f84d514074b0dbf1012ce214180e42085c62c1da016fe5e"} Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.669800 4762 scope.go:117] "RemoveContainer" containerID="e8571ac478c87bcb0b82eb7573e847dad0b644b39f37d237aba6b140a2ceece5" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.707835 4762 scope.go:117] "RemoveContainer" containerID="db5d3b2f02d3830de4f042e67af664fa0d8a922dc8f156061e31e8b3d52c0877" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.713219 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vnl9l"] Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.729145 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vnl9l"] Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.742930 4762 scope.go:117] "RemoveContainer" containerID="721a22d3e8ad2685d95833a616955d8f4d6f64d5070d4f5377f0fa9981a6ff20" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.764738 4762 scope.go:117] "RemoveContainer" containerID="e8571ac478c87bcb0b82eb7573e847dad0b644b39f37d237aba6b140a2ceece5" Mar 08 01:00:36 crc kubenswrapper[4762]: E0308 01:00:36.765321 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8571ac478c87bcb0b82eb7573e847dad0b644b39f37d237aba6b140a2ceece5\": container with ID starting with e8571ac478c87bcb0b82eb7573e847dad0b644b39f37d237aba6b140a2ceece5 not found: ID does not exist" containerID="e8571ac478c87bcb0b82eb7573e847dad0b644b39f37d237aba6b140a2ceece5" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.765388 4762 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8571ac478c87bcb0b82eb7573e847dad0b644b39f37d237aba6b140a2ceece5"} err="failed to get container status \"e8571ac478c87bcb0b82eb7573e847dad0b644b39f37d237aba6b140a2ceece5\": rpc error: code = NotFound desc = could not find container \"e8571ac478c87bcb0b82eb7573e847dad0b644b39f37d237aba6b140a2ceece5\": container with ID starting with e8571ac478c87bcb0b82eb7573e847dad0b644b39f37d237aba6b140a2ceece5 not found: ID does not exist" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.765431 4762 scope.go:117] "RemoveContainer" containerID="db5d3b2f02d3830de4f042e67af664fa0d8a922dc8f156061e31e8b3d52c0877" Mar 08 01:00:36 crc kubenswrapper[4762]: E0308 01:00:36.766148 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db5d3b2f02d3830de4f042e67af664fa0d8a922dc8f156061e31e8b3d52c0877\": container with ID starting with db5d3b2f02d3830de4f042e67af664fa0d8a922dc8f156061e31e8b3d52c0877 not found: ID does not exist" containerID="db5d3b2f02d3830de4f042e67af664fa0d8a922dc8f156061e31e8b3d52c0877" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.766211 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db5d3b2f02d3830de4f042e67af664fa0d8a922dc8f156061e31e8b3d52c0877"} err="failed to get container status \"db5d3b2f02d3830de4f042e67af664fa0d8a922dc8f156061e31e8b3d52c0877\": rpc error: code = NotFound desc = could not find container \"db5d3b2f02d3830de4f042e67af664fa0d8a922dc8f156061e31e8b3d52c0877\": container with ID starting with db5d3b2f02d3830de4f042e67af664fa0d8a922dc8f156061e31e8b3d52c0877 not found: ID does not exist" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.766256 4762 scope.go:117] "RemoveContainer" containerID="721a22d3e8ad2685d95833a616955d8f4d6f64d5070d4f5377f0fa9981a6ff20" Mar 08 01:00:36 crc kubenswrapper[4762]: E0308 
01:00:36.766829 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"721a22d3e8ad2685d95833a616955d8f4d6f64d5070d4f5377f0fa9981a6ff20\": container with ID starting with 721a22d3e8ad2685d95833a616955d8f4d6f64d5070d4f5377f0fa9981a6ff20 not found: ID does not exist" containerID="721a22d3e8ad2685d95833a616955d8f4d6f64d5070d4f5377f0fa9981a6ff20" Mar 08 01:00:36 crc kubenswrapper[4762]: I0308 01:00:36.766876 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721a22d3e8ad2685d95833a616955d8f4d6f64d5070d4f5377f0fa9981a6ff20"} err="failed to get container status \"721a22d3e8ad2685d95833a616955d8f4d6f64d5070d4f5377f0fa9981a6ff20\": rpc error: code = NotFound desc = could not find container \"721a22d3e8ad2685d95833a616955d8f4d6f64d5070d4f5377f0fa9981a6ff20\": container with ID starting with 721a22d3e8ad2685d95833a616955d8f4d6f64d5070d4f5377f0fa9981a6ff20 not found: ID does not exist" Mar 08 01:00:37 crc kubenswrapper[4762]: I0308 01:00:37.278735 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9155b5a-a347-4985-bb1e-5e52bda63c38" path="/var/lib/kubelet/pods/b9155b5a-a347-4985-bb1e-5e52bda63c38/volumes" Mar 08 01:00:37 crc kubenswrapper[4762]: I0308 01:00:37.280105 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e56318e6-ae41-4d92-be18-5cecf4f47c6e" path="/var/lib/kubelet/pods/e56318e6-ae41-4d92-be18-5cecf4f47c6e/volumes" Mar 08 01:00:42 crc kubenswrapper[4762]: I0308 01:00:42.851394 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:00:42 crc kubenswrapper[4762]: I0308 01:00:42.851816 4762 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.165070 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29548861-bhbxf"] Mar 08 01:01:00 crc kubenswrapper[4762]: E0308 01:01:00.166522 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56318e6-ae41-4d92-be18-5cecf4f47c6e" containerName="extract-utilities" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.166549 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56318e6-ae41-4d92-be18-5cecf4f47c6e" containerName="extract-utilities" Mar 08 01:01:00 crc kubenswrapper[4762]: E0308 01:01:00.166585 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9155b5a-a347-4985-bb1e-5e52bda63c38" containerName="extract-utilities" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.166599 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9155b5a-a347-4985-bb1e-5e52bda63c38" containerName="extract-utilities" Mar 08 01:01:00 crc kubenswrapper[4762]: E0308 01:01:00.166619 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56318e6-ae41-4d92-be18-5cecf4f47c6e" containerName="registry-server" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.166633 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56318e6-ae41-4d92-be18-5cecf4f47c6e" containerName="registry-server" Mar 08 01:01:00 crc kubenswrapper[4762]: E0308 01:01:00.166653 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9155b5a-a347-4985-bb1e-5e52bda63c38" containerName="registry-server" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.166666 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9155b5a-a347-4985-bb1e-5e52bda63c38" 
containerName="registry-server" Mar 08 01:01:00 crc kubenswrapper[4762]: E0308 01:01:00.166697 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e56318e6-ae41-4d92-be18-5cecf4f47c6e" containerName="extract-content" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.166710 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e56318e6-ae41-4d92-be18-5cecf4f47c6e" containerName="extract-content" Mar 08 01:01:00 crc kubenswrapper[4762]: E0308 01:01:00.166735 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9155b5a-a347-4985-bb1e-5e52bda63c38" containerName="extract-content" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.166747 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9155b5a-a347-4985-bb1e-5e52bda63c38" containerName="extract-content" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.167161 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9155b5a-a347-4985-bb1e-5e52bda63c38" containerName="registry-server" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.167213 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e56318e6-ae41-4d92-be18-5cecf4f47c6e" containerName="registry-server" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.168469 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29548861-bhbxf" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.177935 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29548861-bhbxf"] Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.310602 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47x44\" (UniqueName: \"kubernetes.io/projected/d3638a10-cb89-4a5e-bd32-db41c873db68-kube-api-access-47x44\") pod \"keystone-cron-29548861-bhbxf\" (UID: \"d3638a10-cb89-4a5e-bd32-db41c873db68\") " pod="openstack/keystone-cron-29548861-bhbxf" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.310952 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-fernet-keys\") pod \"keystone-cron-29548861-bhbxf\" (UID: \"d3638a10-cb89-4a5e-bd32-db41c873db68\") " pod="openstack/keystone-cron-29548861-bhbxf" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.311007 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-config-data\") pod \"keystone-cron-29548861-bhbxf\" (UID: \"d3638a10-cb89-4a5e-bd32-db41c873db68\") " pod="openstack/keystone-cron-29548861-bhbxf" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.311175 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-combined-ca-bundle\") pod \"keystone-cron-29548861-bhbxf\" (UID: \"d3638a10-cb89-4a5e-bd32-db41c873db68\") " pod="openstack/keystone-cron-29548861-bhbxf" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.414383 4762 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-combined-ca-bundle\") pod \"keystone-cron-29548861-bhbxf\" (UID: \"d3638a10-cb89-4a5e-bd32-db41c873db68\") " pod="openstack/keystone-cron-29548861-bhbxf" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.414543 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47x44\" (UniqueName: \"kubernetes.io/projected/d3638a10-cb89-4a5e-bd32-db41c873db68-kube-api-access-47x44\") pod \"keystone-cron-29548861-bhbxf\" (UID: \"d3638a10-cb89-4a5e-bd32-db41c873db68\") " pod="openstack/keystone-cron-29548861-bhbxf" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.414611 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-fernet-keys\") pod \"keystone-cron-29548861-bhbxf\" (UID: \"d3638a10-cb89-4a5e-bd32-db41c873db68\") " pod="openstack/keystone-cron-29548861-bhbxf" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.414782 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-config-data\") pod \"keystone-cron-29548861-bhbxf\" (UID: \"d3638a10-cb89-4a5e-bd32-db41c873db68\") " pod="openstack/keystone-cron-29548861-bhbxf" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.427877 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-combined-ca-bundle\") pod \"keystone-cron-29548861-bhbxf\" (UID: \"d3638a10-cb89-4a5e-bd32-db41c873db68\") " pod="openstack/keystone-cron-29548861-bhbxf" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.428203 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-fernet-keys\") pod \"keystone-cron-29548861-bhbxf\" (UID: \"d3638a10-cb89-4a5e-bd32-db41c873db68\") " pod="openstack/keystone-cron-29548861-bhbxf" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.429161 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-config-data\") pod \"keystone-cron-29548861-bhbxf\" (UID: \"d3638a10-cb89-4a5e-bd32-db41c873db68\") " pod="openstack/keystone-cron-29548861-bhbxf" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.438165 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47x44\" (UniqueName: \"kubernetes.io/projected/d3638a10-cb89-4a5e-bd32-db41c873db68-kube-api-access-47x44\") pod \"keystone-cron-29548861-bhbxf\" (UID: \"d3638a10-cb89-4a5e-bd32-db41c873db68\") " pod="openstack/keystone-cron-29548861-bhbxf" Mar 08 01:01:00 crc kubenswrapper[4762]: I0308 01:01:00.505936 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29548861-bhbxf" Mar 08 01:01:01 crc kubenswrapper[4762]: I0308 01:01:01.000294 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29548861-bhbxf"] Mar 08 01:01:02 crc kubenswrapper[4762]: I0308 01:01:02.015656 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29548861-bhbxf" event={"ID":"d3638a10-cb89-4a5e-bd32-db41c873db68","Type":"ContainerStarted","Data":"c41d1ff767d0c777908768bd84c28bd6459c6c3ba986428d1862495a86dafe86"} Mar 08 01:01:02 crc kubenswrapper[4762]: I0308 01:01:02.016004 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29548861-bhbxf" event={"ID":"d3638a10-cb89-4a5e-bd32-db41c873db68","Type":"ContainerStarted","Data":"b92de4e02cb56332cd7fc9ff8a47a79c7e2e5891aa31e387d3c8a8ac2d58785c"} Mar 08 01:01:02 crc kubenswrapper[4762]: I0308 01:01:02.046151 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29548861-bhbxf" podStartSLOduration=2.046131056 podStartE2EDuration="2.046131056s" podCreationTimestamp="2026-03-08 01:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 01:01:02.044594859 +0000 UTC m=+2283.518739243" watchObservedRunningTime="2026-03-08 01:01:02.046131056 +0000 UTC m=+2283.520275410" Mar 08 01:01:02 crc kubenswrapper[4762]: E0308 01:01:02.765882 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 504 Gateway Time-out" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 08 01:01:02 crc kubenswrapper[4762]: E0308 01:01:02.766248 4762 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 08 01:01:02 crc kubenswrapper[4762]: container 
&Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 08 01:01:02 crc kubenswrapper[4762]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-884cc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29548860-tmc6j_openshift-infra(7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0): ErrImagePull: copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 504 Gateway Time-out Mar 08 01:01:02 crc kubenswrapper[4762]: > logger="UnhandledError" Mar 08 01:01:02 crc kubenswrapper[4762]: E0308 01:01:02.767420 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: parsing image configuration: fetching blob: received unexpected HTTP status: 504 Gateway Time-out\"" pod="openshift-infra/auto-csr-approver-29548860-tmc6j" podUID="7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0" Mar 08 01:01:03 crc kubenswrapper[4762]: E0308 01:01:03.030531 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29548860-tmc6j" podUID="7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0" Mar 08 01:01:04 crc kubenswrapper[4762]: I0308 01:01:04.042053 4762 generic.go:334] "Generic (PLEG): container finished" podID="d3638a10-cb89-4a5e-bd32-db41c873db68" containerID="c41d1ff767d0c777908768bd84c28bd6459c6c3ba986428d1862495a86dafe86" exitCode=0 Mar 08 01:01:04 crc kubenswrapper[4762]: I0308 01:01:04.042132 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29548861-bhbxf" event={"ID":"d3638a10-cb89-4a5e-bd32-db41c873db68","Type":"ContainerDied","Data":"c41d1ff767d0c777908768bd84c28bd6459c6c3ba986428d1862495a86dafe86"} Mar 08 01:01:05 crc kubenswrapper[4762]: I0308 01:01:05.507872 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29548861-bhbxf" Mar 08 01:01:05 crc kubenswrapper[4762]: I0308 01:01:05.640823 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47x44\" (UniqueName: \"kubernetes.io/projected/d3638a10-cb89-4a5e-bd32-db41c873db68-kube-api-access-47x44\") pod \"d3638a10-cb89-4a5e-bd32-db41c873db68\" (UID: \"d3638a10-cb89-4a5e-bd32-db41c873db68\") " Mar 08 01:01:05 crc kubenswrapper[4762]: I0308 01:01:05.640940 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-fernet-keys\") pod \"d3638a10-cb89-4a5e-bd32-db41c873db68\" (UID: \"d3638a10-cb89-4a5e-bd32-db41c873db68\") " Mar 08 01:01:05 crc kubenswrapper[4762]: I0308 01:01:05.641184 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-combined-ca-bundle\") pod \"d3638a10-cb89-4a5e-bd32-db41c873db68\" (UID: \"d3638a10-cb89-4a5e-bd32-db41c873db68\") " Mar 08 
01:01:05 crc kubenswrapper[4762]: I0308 01:01:05.641234 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-config-data\") pod \"d3638a10-cb89-4a5e-bd32-db41c873db68\" (UID: \"d3638a10-cb89-4a5e-bd32-db41c873db68\") " Mar 08 01:01:05 crc kubenswrapper[4762]: I0308 01:01:05.647383 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d3638a10-cb89-4a5e-bd32-db41c873db68" (UID: "d3638a10-cb89-4a5e-bd32-db41c873db68"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:01:05 crc kubenswrapper[4762]: I0308 01:01:05.647478 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3638a10-cb89-4a5e-bd32-db41c873db68-kube-api-access-47x44" (OuterVolumeSpecName: "kube-api-access-47x44") pod "d3638a10-cb89-4a5e-bd32-db41c873db68" (UID: "d3638a10-cb89-4a5e-bd32-db41c873db68"). InnerVolumeSpecName "kube-api-access-47x44". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:01:05 crc kubenswrapper[4762]: I0308 01:01:05.669856 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3638a10-cb89-4a5e-bd32-db41c873db68" (UID: "d3638a10-cb89-4a5e-bd32-db41c873db68"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:01:05 crc kubenswrapper[4762]: I0308 01:01:05.736641 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-config-data" (OuterVolumeSpecName: "config-data") pod "d3638a10-cb89-4a5e-bd32-db41c873db68" (UID: "d3638a10-cb89-4a5e-bd32-db41c873db68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:01:05 crc kubenswrapper[4762]: I0308 01:01:05.743605 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47x44\" (UniqueName: \"kubernetes.io/projected/d3638a10-cb89-4a5e-bd32-db41c873db68-kube-api-access-47x44\") on node \"crc\" DevicePath \"\"" Mar 08 01:01:05 crc kubenswrapper[4762]: I0308 01:01:05.743646 4762 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 08 01:01:05 crc kubenswrapper[4762]: I0308 01:01:05.743663 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:01:05 crc kubenswrapper[4762]: I0308 01:01:05.743678 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3638a10-cb89-4a5e-bd32-db41c873db68-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 01:01:06 crc kubenswrapper[4762]: I0308 01:01:06.068497 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29548861-bhbxf" event={"ID":"d3638a10-cb89-4a5e-bd32-db41c873db68","Type":"ContainerDied","Data":"b92de4e02cb56332cd7fc9ff8a47a79c7e2e5891aa31e387d3c8a8ac2d58785c"} Mar 08 01:01:06 crc kubenswrapper[4762]: I0308 01:01:06.068538 4762 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="b92de4e02cb56332cd7fc9ff8a47a79c7e2e5891aa31e387d3c8a8ac2d58785c" Mar 08 01:01:06 crc kubenswrapper[4762]: I0308 01:01:06.068582 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29548861-bhbxf" Mar 08 01:01:12 crc kubenswrapper[4762]: I0308 01:01:12.851687 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:01:12 crc kubenswrapper[4762]: I0308 01:01:12.852188 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:01:12 crc kubenswrapper[4762]: I0308 01:01:12.852251 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 01:01:12 crc kubenswrapper[4762]: I0308 01:01:12.853357 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"20f03e1e6f0a5e71fdfaf64291cd12e01a1c6707b014e6b5cbd7f13c2f3c7add"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 01:01:12 crc kubenswrapper[4762]: I0308 01:01:12.853446 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" 
containerID="cri-o://20f03e1e6f0a5e71fdfaf64291cd12e01a1c6707b014e6b5cbd7f13c2f3c7add" gracePeriod=600 Mar 08 01:01:13 crc kubenswrapper[4762]: I0308 01:01:13.159837 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="20f03e1e6f0a5e71fdfaf64291cd12e01a1c6707b014e6b5cbd7f13c2f3c7add" exitCode=0 Mar 08 01:01:13 crc kubenswrapper[4762]: I0308 01:01:13.160211 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"20f03e1e6f0a5e71fdfaf64291cd12e01a1c6707b014e6b5cbd7f13c2f3c7add"} Mar 08 01:01:13 crc kubenswrapper[4762]: I0308 01:01:13.160253 4762 scope.go:117] "RemoveContainer" containerID="c0413f21e804ddc6a76d526b8a89d205224f7458c44d267ea85a6fe64eff14ac" Mar 08 01:01:14 crc kubenswrapper[4762]: I0308 01:01:14.173371 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a"} Mar 08 01:01:28 crc kubenswrapper[4762]: I0308 01:01:28.787541 4762 scope.go:117] "RemoveContainer" containerID="65dc75ac130668727c2741affea466494afd2139ee82688665d1d07f4658d61f" Mar 08 01:01:29 crc kubenswrapper[4762]: I0308 01:01:29.360441 4762 generic.go:334] "Generic (PLEG): container finished" podID="7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0" containerID="ba9d95096de9ea1f95ccfb9d633d6d4d8806c5065428090ee1a626a1812acce0" exitCode=0 Mar 08 01:01:29 crc kubenswrapper[4762]: I0308 01:01:29.360540 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548860-tmc6j" event={"ID":"7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0","Type":"ContainerDied","Data":"ba9d95096de9ea1f95ccfb9d633d6d4d8806c5065428090ee1a626a1812acce0"} Mar 08 01:01:30 crc 
kubenswrapper[4762]: I0308 01:01:30.911618 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548860-tmc6j" Mar 08 01:01:31 crc kubenswrapper[4762]: I0308 01:01:31.015939 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-884cc\" (UniqueName: \"kubernetes.io/projected/7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0-kube-api-access-884cc\") pod \"7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0\" (UID: \"7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0\") " Mar 08 01:01:31 crc kubenswrapper[4762]: I0308 01:01:31.022602 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0-kube-api-access-884cc" (OuterVolumeSpecName: "kube-api-access-884cc") pod "7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0" (UID: "7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0"). InnerVolumeSpecName "kube-api-access-884cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:01:31 crc kubenswrapper[4762]: I0308 01:01:31.118581 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-884cc\" (UniqueName: \"kubernetes.io/projected/7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0-kube-api-access-884cc\") on node \"crc\" DevicePath \"\"" Mar 08 01:01:31 crc kubenswrapper[4762]: I0308 01:01:31.389045 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548860-tmc6j" event={"ID":"7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0","Type":"ContainerDied","Data":"671512996f3cf9a386d65d4eb8199d19529ccd62f0ddb58740fc752221828b74"} Mar 08 01:01:31 crc kubenswrapper[4762]: I0308 01:01:31.389449 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="671512996f3cf9a386d65d4eb8199d19529ccd62f0ddb58740fc752221828b74" Mar 08 01:01:31 crc kubenswrapper[4762]: I0308 01:01:31.389104 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548860-tmc6j" Mar 08 01:01:32 crc kubenswrapper[4762]: I0308 01:01:32.002406 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548854-g5tgg"] Mar 08 01:01:32 crc kubenswrapper[4762]: I0308 01:01:32.018322 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548854-g5tgg"] Mar 08 01:01:33 crc kubenswrapper[4762]: I0308 01:01:33.292421 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05c005e7-2d25-403a-a39b-d4833e076719" path="/var/lib/kubelet/pods/05c005e7-2d25-403a-a39b-d4833e076719/volumes" Mar 08 01:02:00 crc kubenswrapper[4762]: I0308 01:02:00.173028 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548862-qvgrl"] Mar 08 01:02:00 crc kubenswrapper[4762]: E0308 01:02:00.174697 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3638a10-cb89-4a5e-bd32-db41c873db68" containerName="keystone-cron" Mar 08 01:02:00 crc kubenswrapper[4762]: I0308 01:02:00.174726 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3638a10-cb89-4a5e-bd32-db41c873db68" containerName="keystone-cron" Mar 08 01:02:00 crc kubenswrapper[4762]: E0308 01:02:00.174795 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0" containerName="oc" Mar 08 01:02:00 crc kubenswrapper[4762]: I0308 01:02:00.174811 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0" containerName="oc" Mar 08 01:02:00 crc kubenswrapper[4762]: I0308 01:02:00.175304 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3638a10-cb89-4a5e-bd32-db41c873db68" containerName="keystone-cron" Mar 08 01:02:00 crc kubenswrapper[4762]: I0308 01:02:00.175357 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0" 
containerName="oc" Mar 08 01:02:00 crc kubenswrapper[4762]: I0308 01:02:00.176906 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548862-qvgrl" Mar 08 01:02:00 crc kubenswrapper[4762]: I0308 01:02:00.179492 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:02:00 crc kubenswrapper[4762]: I0308 01:02:00.180718 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:02:00 crc kubenswrapper[4762]: I0308 01:02:00.182609 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:02:00 crc kubenswrapper[4762]: I0308 01:02:00.182820 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548862-qvgrl"] Mar 08 01:02:00 crc kubenswrapper[4762]: I0308 01:02:00.293902 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfq7c\" (UniqueName: \"kubernetes.io/projected/e1fd34ad-ad62-40c8-a342-fcafb93271a1-kube-api-access-hfq7c\") pod \"auto-csr-approver-29548862-qvgrl\" (UID: \"e1fd34ad-ad62-40c8-a342-fcafb93271a1\") " pod="openshift-infra/auto-csr-approver-29548862-qvgrl" Mar 08 01:02:00 crc kubenswrapper[4762]: I0308 01:02:00.396494 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfq7c\" (UniqueName: \"kubernetes.io/projected/e1fd34ad-ad62-40c8-a342-fcafb93271a1-kube-api-access-hfq7c\") pod \"auto-csr-approver-29548862-qvgrl\" (UID: \"e1fd34ad-ad62-40c8-a342-fcafb93271a1\") " pod="openshift-infra/auto-csr-approver-29548862-qvgrl" Mar 08 01:02:00 crc kubenswrapper[4762]: I0308 01:02:00.425736 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfq7c\" (UniqueName: 
\"kubernetes.io/projected/e1fd34ad-ad62-40c8-a342-fcafb93271a1-kube-api-access-hfq7c\") pod \"auto-csr-approver-29548862-qvgrl\" (UID: \"e1fd34ad-ad62-40c8-a342-fcafb93271a1\") " pod="openshift-infra/auto-csr-approver-29548862-qvgrl" Mar 08 01:02:00 crc kubenswrapper[4762]: I0308 01:02:00.505473 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548862-qvgrl" Mar 08 01:02:01 crc kubenswrapper[4762]: I0308 01:02:01.023565 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548862-qvgrl"] Mar 08 01:02:01 crc kubenswrapper[4762]: I0308 01:02:01.815996 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548862-qvgrl" event={"ID":"e1fd34ad-ad62-40c8-a342-fcafb93271a1","Type":"ContainerStarted","Data":"97c3e9acc2fb30f755b1c88090bd60bc65b21858d649fbce19c6c51730d564a3"} Mar 08 01:02:02 crc kubenswrapper[4762]: I0308 01:02:02.830543 4762 generic.go:334] "Generic (PLEG): container finished" podID="e1fd34ad-ad62-40c8-a342-fcafb93271a1" containerID="0366d6628fe2b7aa9666be4ebf602bd7cc0bd39bb348ce4b163efb01b47ba4c7" exitCode=0 Mar 08 01:02:02 crc kubenswrapper[4762]: I0308 01:02:02.830612 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548862-qvgrl" event={"ID":"e1fd34ad-ad62-40c8-a342-fcafb93271a1","Type":"ContainerDied","Data":"0366d6628fe2b7aa9666be4ebf602bd7cc0bd39bb348ce4b163efb01b47ba4c7"} Mar 08 01:02:04 crc kubenswrapper[4762]: I0308 01:02:04.382380 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548862-qvgrl" Mar 08 01:02:04 crc kubenswrapper[4762]: I0308 01:02:04.394608 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfq7c\" (UniqueName: \"kubernetes.io/projected/e1fd34ad-ad62-40c8-a342-fcafb93271a1-kube-api-access-hfq7c\") pod \"e1fd34ad-ad62-40c8-a342-fcafb93271a1\" (UID: \"e1fd34ad-ad62-40c8-a342-fcafb93271a1\") " Mar 08 01:02:04 crc kubenswrapper[4762]: I0308 01:02:04.405124 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1fd34ad-ad62-40c8-a342-fcafb93271a1-kube-api-access-hfq7c" (OuterVolumeSpecName: "kube-api-access-hfq7c") pod "e1fd34ad-ad62-40c8-a342-fcafb93271a1" (UID: "e1fd34ad-ad62-40c8-a342-fcafb93271a1"). InnerVolumeSpecName "kube-api-access-hfq7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:02:04 crc kubenswrapper[4762]: I0308 01:02:04.496994 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfq7c\" (UniqueName: \"kubernetes.io/projected/e1fd34ad-ad62-40c8-a342-fcafb93271a1-kube-api-access-hfq7c\") on node \"crc\" DevicePath \"\"" Mar 08 01:02:04 crc kubenswrapper[4762]: I0308 01:02:04.860333 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548862-qvgrl" event={"ID":"e1fd34ad-ad62-40c8-a342-fcafb93271a1","Type":"ContainerDied","Data":"97c3e9acc2fb30f755b1c88090bd60bc65b21858d649fbce19c6c51730d564a3"} Mar 08 01:02:04 crc kubenswrapper[4762]: I0308 01:02:04.860380 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97c3e9acc2fb30f755b1c88090bd60bc65b21858d649fbce19c6c51730d564a3" Mar 08 01:02:04 crc kubenswrapper[4762]: I0308 01:02:04.860379 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548862-qvgrl" Mar 08 01:02:05 crc kubenswrapper[4762]: I0308 01:02:05.483624 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548856-zpmvc"] Mar 08 01:02:05 crc kubenswrapper[4762]: I0308 01:02:05.495954 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548856-zpmvc"] Mar 08 01:02:07 crc kubenswrapper[4762]: I0308 01:02:07.285806 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74c4248d-35d8-4ea7-9546-7665ea1c9f15" path="/var/lib/kubelet/pods/74c4248d-35d8-4ea7-9546-7665ea1c9f15/volumes" Mar 08 01:02:28 crc kubenswrapper[4762]: I0308 01:02:28.898238 4762 scope.go:117] "RemoveContainer" containerID="e12aae2ce657f7444a3694003a2e9b2d3a37179e3389a98f3c2af4ac99ce1730" Mar 08 01:02:28 crc kubenswrapper[4762]: I0308 01:02:28.953447 4762 scope.go:117] "RemoveContainer" containerID="38562e5240f1d9cbadc63e799b4ae7ca7f43909095489832b193c2143799b657" Mar 08 01:03:07 crc kubenswrapper[4762]: I0308 01:03:07.630961 4762 generic.go:334] "Generic (PLEG): container finished" podID="068a0247-3a6f-4505-9574-deba254e56f0" containerID="9445e6731b8cc64cd2ee5717b3a60d317c440fadd974f0afb285575a41aa6852" exitCode=0 Mar 08 01:03:07 crc kubenswrapper[4762]: I0308 01:03:07.631145 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" event={"ID":"068a0247-3a6f-4505-9574-deba254e56f0","Type":"ContainerDied","Data":"9445e6731b8cc64cd2ee5717b3a60d317c440fadd974f0afb285575a41aa6852"} Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.213454 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.396231 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkwm2\" (UniqueName: \"kubernetes.io/projected/068a0247-3a6f-4505-9574-deba254e56f0-kube-api-access-lkwm2\") pod \"068a0247-3a6f-4505-9574-deba254e56f0\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.396338 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-inventory\") pod \"068a0247-3a6f-4505-9574-deba254e56f0\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.396459 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-libvirt-combined-ca-bundle\") pod \"068a0247-3a6f-4505-9574-deba254e56f0\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.396515 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-ssh-key-openstack-edpm-ipam\") pod \"068a0247-3a6f-4505-9574-deba254e56f0\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.396566 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-libvirt-secret-0\") pod \"068a0247-3a6f-4505-9574-deba254e56f0\" (UID: \"068a0247-3a6f-4505-9574-deba254e56f0\") " Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.406354 4762 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/068a0247-3a6f-4505-9574-deba254e56f0-kube-api-access-lkwm2" (OuterVolumeSpecName: "kube-api-access-lkwm2") pod "068a0247-3a6f-4505-9574-deba254e56f0" (UID: "068a0247-3a6f-4505-9574-deba254e56f0"). InnerVolumeSpecName "kube-api-access-lkwm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.406583 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "068a0247-3a6f-4505-9574-deba254e56f0" (UID: "068a0247-3a6f-4505-9574-deba254e56f0"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.442244 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "068a0247-3a6f-4505-9574-deba254e56f0" (UID: "068a0247-3a6f-4505-9574-deba254e56f0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.458303 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-inventory" (OuterVolumeSpecName: "inventory") pod "068a0247-3a6f-4505-9574-deba254e56f0" (UID: "068a0247-3a6f-4505-9574-deba254e56f0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.469600 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "068a0247-3a6f-4505-9574-deba254e56f0" (UID: "068a0247-3a6f-4505-9574-deba254e56f0"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.499833 4762 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.499871 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.499887 4762 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.499899 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkwm2\" (UniqueName: \"kubernetes.io/projected/068a0247-3a6f-4505-9574-deba254e56f0-kube-api-access-lkwm2\") on node \"crc\" DevicePath \"\"" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.499912 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/068a0247-3a6f-4505-9574-deba254e56f0-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.656288 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" event={"ID":"068a0247-3a6f-4505-9574-deba254e56f0","Type":"ContainerDied","Data":"854da315d7f6aad365fd5fbaa15b5b162cb429157046b8c50130cf792c3e475e"} Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.656334 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="854da315d7f6aad365fd5fbaa15b5b162cb429157046b8c50130cf792c3e475e" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.656399 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.791522 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr"] Mar 08 01:03:09 crc kubenswrapper[4762]: E0308 01:03:09.792115 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="068a0247-3a6f-4505-9574-deba254e56f0" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.792137 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="068a0247-3a6f-4505-9574-deba254e56f0" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 08 01:03:09 crc kubenswrapper[4762]: E0308 01:03:09.792153 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1fd34ad-ad62-40c8-a342-fcafb93271a1" containerName="oc" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.792161 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1fd34ad-ad62-40c8-a342-fcafb93271a1" containerName="oc" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.792403 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1fd34ad-ad62-40c8-a342-fcafb93271a1" containerName="oc" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.792427 4762 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="068a0247-3a6f-4505-9574-deba254e56f0" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.793487 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.801651 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.802091 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.802615 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.802622 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.805998 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.808849 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.808913 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.808956 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzl4g\" (UniqueName: \"kubernetes.io/projected/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-kube-api-access-hzl4g\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.809028 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.809125 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.809214 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.809603 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.823864 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr"] Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.910430 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.910532 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.910555 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.910575 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzl4g\" (UniqueName: \"kubernetes.io/projected/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-kube-api-access-hzl4g\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.910603 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.910646 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.910688 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 
crc kubenswrapper[4762]: I0308 01:03:09.915449 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.915709 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.916270 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.916654 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.917115 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: 
\"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.920285 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:09 crc kubenswrapper[4762]: I0308 01:03:09.931588 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzl4g\" (UniqueName: \"kubernetes.io/projected/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-kube-api-access-hzl4g\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-27hkr\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:10 crc kubenswrapper[4762]: I0308 01:03:10.127647 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:03:10 crc kubenswrapper[4762]: I0308 01:03:10.749937 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr"] Mar 08 01:03:11 crc kubenswrapper[4762]: I0308 01:03:11.686237 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" event={"ID":"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853","Type":"ContainerStarted","Data":"81780de73bae7feda2ce1e462ac378b4934bb5d62ed0e2691e16306908e46cb2"} Mar 08 01:03:11 crc kubenswrapper[4762]: I0308 01:03:11.686830 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" event={"ID":"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853","Type":"ContainerStarted","Data":"476a83c7594c46be75296c7a14f559a46c627d7c20099a02237ac898f6a72137"} Mar 08 01:03:11 crc kubenswrapper[4762]: I0308 01:03:11.725559 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" podStartSLOduration=2.187615057 podStartE2EDuration="2.725531163s" podCreationTimestamp="2026-03-08 01:03:09 +0000 UTC" firstStartedPulling="2026-03-08 01:03:10.773651645 +0000 UTC m=+2412.247795989" lastFinishedPulling="2026-03-08 01:03:11.311567711 +0000 UTC m=+2412.785712095" observedRunningTime="2026-03-08 01:03:11.705921476 +0000 UTC m=+2413.180065840" watchObservedRunningTime="2026-03-08 01:03:11.725531163 +0000 UTC m=+2413.199675537" Mar 08 01:03:42 crc kubenswrapper[4762]: I0308 01:03:42.851656 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:03:42 crc kubenswrapper[4762]: 
I0308 01:03:42.852262 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:04:00 crc kubenswrapper[4762]: I0308 01:04:00.185684 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548864-rtrcn"] Mar 08 01:04:00 crc kubenswrapper[4762]: I0308 01:04:00.188159 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548864-rtrcn" Mar 08 01:04:00 crc kubenswrapper[4762]: I0308 01:04:00.191864 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:04:00 crc kubenswrapper[4762]: I0308 01:04:00.192222 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:04:00 crc kubenswrapper[4762]: I0308 01:04:00.196643 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:04:00 crc kubenswrapper[4762]: I0308 01:04:00.201877 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548864-rtrcn"] Mar 08 01:04:00 crc kubenswrapper[4762]: I0308 01:04:00.309238 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sznbz\" (UniqueName: \"kubernetes.io/projected/12bd746d-51ab-49e8-937b-8df6b580b687-kube-api-access-sznbz\") pod \"auto-csr-approver-29548864-rtrcn\" (UID: \"12bd746d-51ab-49e8-937b-8df6b580b687\") " pod="openshift-infra/auto-csr-approver-29548864-rtrcn" Mar 08 01:04:00 crc kubenswrapper[4762]: I0308 01:04:00.411628 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-sznbz\" (UniqueName: \"kubernetes.io/projected/12bd746d-51ab-49e8-937b-8df6b580b687-kube-api-access-sznbz\") pod \"auto-csr-approver-29548864-rtrcn\" (UID: \"12bd746d-51ab-49e8-937b-8df6b580b687\") " pod="openshift-infra/auto-csr-approver-29548864-rtrcn" Mar 08 01:04:00 crc kubenswrapper[4762]: I0308 01:04:00.450350 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sznbz\" (UniqueName: \"kubernetes.io/projected/12bd746d-51ab-49e8-937b-8df6b580b687-kube-api-access-sznbz\") pod \"auto-csr-approver-29548864-rtrcn\" (UID: \"12bd746d-51ab-49e8-937b-8df6b580b687\") " pod="openshift-infra/auto-csr-approver-29548864-rtrcn" Mar 08 01:04:00 crc kubenswrapper[4762]: I0308 01:04:00.532078 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548864-rtrcn" Mar 08 01:04:01 crc kubenswrapper[4762]: I0308 01:04:01.077892 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548864-rtrcn"] Mar 08 01:04:01 crc kubenswrapper[4762]: I0308 01:04:01.347455 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548864-rtrcn" event={"ID":"12bd746d-51ab-49e8-937b-8df6b580b687","Type":"ContainerStarted","Data":"f3326839384014df78e5f8065c13b51dab87dfc55622c495b094552f1e43fe59"} Mar 08 01:04:02 crc kubenswrapper[4762]: I0308 01:04:02.361517 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548864-rtrcn" event={"ID":"12bd746d-51ab-49e8-937b-8df6b580b687","Type":"ContainerStarted","Data":"cfb0f5881196a1e4abd21a2e0a78db740f6235dae0c3e27c50deb9f382dfe92f"} Mar 08 01:04:02 crc kubenswrapper[4762]: I0308 01:04:02.380244 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548864-rtrcn" podStartSLOduration=1.508660766 podStartE2EDuration="2.380223518s" podCreationTimestamp="2026-03-08 
01:04:00 +0000 UTC" firstStartedPulling="2026-03-08 01:04:01.090390922 +0000 UTC m=+2462.564535256" lastFinishedPulling="2026-03-08 01:04:01.961953654 +0000 UTC m=+2463.436098008" observedRunningTime="2026-03-08 01:04:02.374643198 +0000 UTC m=+2463.848787552" watchObservedRunningTime="2026-03-08 01:04:02.380223518 +0000 UTC m=+2463.854367862" Mar 08 01:04:03 crc kubenswrapper[4762]: I0308 01:04:03.377828 4762 generic.go:334] "Generic (PLEG): container finished" podID="12bd746d-51ab-49e8-937b-8df6b580b687" containerID="cfb0f5881196a1e4abd21a2e0a78db740f6235dae0c3e27c50deb9f382dfe92f" exitCode=0 Mar 08 01:04:03 crc kubenswrapper[4762]: I0308 01:04:03.377943 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548864-rtrcn" event={"ID":"12bd746d-51ab-49e8-937b-8df6b580b687","Type":"ContainerDied","Data":"cfb0f5881196a1e4abd21a2e0a78db740f6235dae0c3e27c50deb9f382dfe92f"} Mar 08 01:04:04 crc kubenswrapper[4762]: I0308 01:04:04.879046 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548864-rtrcn" Mar 08 01:04:05 crc kubenswrapper[4762]: I0308 01:04:05.037612 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sznbz\" (UniqueName: \"kubernetes.io/projected/12bd746d-51ab-49e8-937b-8df6b580b687-kube-api-access-sznbz\") pod \"12bd746d-51ab-49e8-937b-8df6b580b687\" (UID: \"12bd746d-51ab-49e8-937b-8df6b580b687\") " Mar 08 01:04:05 crc kubenswrapper[4762]: I0308 01:04:05.045484 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12bd746d-51ab-49e8-937b-8df6b580b687-kube-api-access-sznbz" (OuterVolumeSpecName: "kube-api-access-sznbz") pod "12bd746d-51ab-49e8-937b-8df6b580b687" (UID: "12bd746d-51ab-49e8-937b-8df6b580b687"). InnerVolumeSpecName "kube-api-access-sznbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:04:05 crc kubenswrapper[4762]: I0308 01:04:05.140553 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sznbz\" (UniqueName: \"kubernetes.io/projected/12bd746d-51ab-49e8-937b-8df6b580b687-kube-api-access-sznbz\") on node \"crc\" DevicePath \"\"" Mar 08 01:04:05 crc kubenswrapper[4762]: I0308 01:04:05.403202 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548864-rtrcn" event={"ID":"12bd746d-51ab-49e8-937b-8df6b580b687","Type":"ContainerDied","Data":"f3326839384014df78e5f8065c13b51dab87dfc55622c495b094552f1e43fe59"} Mar 08 01:04:05 crc kubenswrapper[4762]: I0308 01:04:05.403446 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3326839384014df78e5f8065c13b51dab87dfc55622c495b094552f1e43fe59" Mar 08 01:04:05 crc kubenswrapper[4762]: I0308 01:04:05.403241 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548864-rtrcn" Mar 08 01:04:05 crc kubenswrapper[4762]: I0308 01:04:05.488641 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548858-zgd89"] Mar 08 01:04:05 crc kubenswrapper[4762]: I0308 01:04:05.499894 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548858-zgd89"] Mar 08 01:04:07 crc kubenswrapper[4762]: I0308 01:04:07.294038 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbf0114e-ac57-40f9-b3d7-219e71955cab" path="/var/lib/kubelet/pods/fbf0114e-ac57-40f9-b3d7-219e71955cab/volumes" Mar 08 01:04:12 crc kubenswrapper[4762]: I0308 01:04:12.852090 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 08 01:04:12 crc kubenswrapper[4762]: I0308 01:04:12.853024 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:04:29 crc kubenswrapper[4762]: I0308 01:04:29.127040 4762 scope.go:117] "RemoveContainer" containerID="6950c9cce05d665407a8c8c606f58c8ee05dcb74bfc732e0db4749be1f0daf41" Mar 08 01:04:42 crc kubenswrapper[4762]: I0308 01:04:42.851585 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:04:42 crc kubenswrapper[4762]: I0308 01:04:42.852252 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:04:42 crc kubenswrapper[4762]: I0308 01:04:42.852313 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 01:04:42 crc kubenswrapper[4762]: I0308 01:04:42.853252 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 
08 01:04:42 crc kubenswrapper[4762]: I0308 01:04:42.853346 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" gracePeriod=600 Mar 08 01:04:42 crc kubenswrapper[4762]: E0308 01:04:42.984251 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:04:43 crc kubenswrapper[4762]: I0308 01:04:43.850164 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" exitCode=0 Mar 08 01:04:43 crc kubenswrapper[4762]: I0308 01:04:43.850271 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a"} Mar 08 01:04:43 crc kubenswrapper[4762]: I0308 01:04:43.850889 4762 scope.go:117] "RemoveContainer" containerID="20f03e1e6f0a5e71fdfaf64291cd12e01a1c6707b014e6b5cbd7f13c2f3c7add" Mar 08 01:04:43 crc kubenswrapper[4762]: I0308 01:04:43.851860 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:04:43 crc kubenswrapper[4762]: E0308 01:04:43.852365 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:04:58 crc kubenswrapper[4762]: I0308 01:04:58.263230 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:04:58 crc kubenswrapper[4762]: E0308 01:04:58.265277 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:05:11 crc kubenswrapper[4762]: I0308 01:05:11.266576 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:05:11 crc kubenswrapper[4762]: E0308 01:05:11.267680 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:05:26 crc kubenswrapper[4762]: I0308 01:05:26.264020 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:05:26 crc kubenswrapper[4762]: E0308 01:05:26.265675 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:05:28 crc kubenswrapper[4762]: I0308 01:05:27.831183 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-xtl98" podUID="0707d234-c53e-4212-b289-65a10c0b1502" containerName="registry-server" probeResult="failure" output=< Mar 08 01:05:28 crc kubenswrapper[4762]: timeout: health rpc did not complete within 1s Mar 08 01:05:28 crc kubenswrapper[4762]: > Mar 08 01:05:37 crc kubenswrapper[4762]: I0308 01:05:37.264174 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:05:37 crc kubenswrapper[4762]: E0308 01:05:37.265500 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:05:51 crc kubenswrapper[4762]: I0308 01:05:51.263558 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:05:51 crc kubenswrapper[4762]: E0308 01:05:51.264426 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:06:00 crc kubenswrapper[4762]: I0308 01:06:00.175979 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548866-qzcd8"] Mar 08 01:06:00 crc kubenswrapper[4762]: E0308 01:06:00.177582 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12bd746d-51ab-49e8-937b-8df6b580b687" containerName="oc" Mar 08 01:06:00 crc kubenswrapper[4762]: I0308 01:06:00.177609 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="12bd746d-51ab-49e8-937b-8df6b580b687" containerName="oc" Mar 08 01:06:00 crc kubenswrapper[4762]: I0308 01:06:00.178161 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="12bd746d-51ab-49e8-937b-8df6b580b687" containerName="oc" Mar 08 01:06:00 crc kubenswrapper[4762]: I0308 01:06:00.179513 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548866-qzcd8" Mar 08 01:06:00 crc kubenswrapper[4762]: I0308 01:06:00.183129 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:06:00 crc kubenswrapper[4762]: I0308 01:06:00.183483 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:06:00 crc kubenswrapper[4762]: I0308 01:06:00.183824 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:06:00 crc kubenswrapper[4762]: I0308 01:06:00.191398 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548866-qzcd8"] Mar 08 01:06:00 crc kubenswrapper[4762]: I0308 01:06:00.308424 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4pg8\" (UniqueName: \"kubernetes.io/projected/b5bcb215-1281-464a-aa1f-28099e754a1f-kube-api-access-h4pg8\") pod \"auto-csr-approver-29548866-qzcd8\" (UID: \"b5bcb215-1281-464a-aa1f-28099e754a1f\") " pod="openshift-infra/auto-csr-approver-29548866-qzcd8" Mar 08 01:06:00 crc kubenswrapper[4762]: I0308 01:06:00.411564 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4pg8\" (UniqueName: \"kubernetes.io/projected/b5bcb215-1281-464a-aa1f-28099e754a1f-kube-api-access-h4pg8\") pod \"auto-csr-approver-29548866-qzcd8\" (UID: \"b5bcb215-1281-464a-aa1f-28099e754a1f\") " pod="openshift-infra/auto-csr-approver-29548866-qzcd8" Mar 08 01:06:00 crc kubenswrapper[4762]: I0308 01:06:00.441138 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4pg8\" (UniqueName: \"kubernetes.io/projected/b5bcb215-1281-464a-aa1f-28099e754a1f-kube-api-access-h4pg8\") pod \"auto-csr-approver-29548866-qzcd8\" (UID: \"b5bcb215-1281-464a-aa1f-28099e754a1f\") " 
pod="openshift-infra/auto-csr-approver-29548866-qzcd8" Mar 08 01:06:00 crc kubenswrapper[4762]: I0308 01:06:00.521732 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548866-qzcd8" Mar 08 01:06:01 crc kubenswrapper[4762]: I0308 01:06:01.032316 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548866-qzcd8"] Mar 08 01:06:01 crc kubenswrapper[4762]: I0308 01:06:01.033953 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 01:06:01 crc kubenswrapper[4762]: I0308 01:06:01.562495 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548866-qzcd8" event={"ID":"b5bcb215-1281-464a-aa1f-28099e754a1f","Type":"ContainerStarted","Data":"90860676d1422ad8a77742cfc6af7a31c5a75760553c488f9907f34cb22e40e4"} Mar 08 01:06:03 crc kubenswrapper[4762]: I0308 01:06:03.596524 4762 generic.go:334] "Generic (PLEG): container finished" podID="c4f0b3fa-3113-4b3a-8dc1-bf91b0968853" containerID="81780de73bae7feda2ce1e462ac378b4934bb5d62ed0e2691e16306908e46cb2" exitCode=0 Mar 08 01:06:03 crc kubenswrapper[4762]: I0308 01:06:03.597305 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" event={"ID":"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853","Type":"ContainerDied","Data":"81780de73bae7feda2ce1e462ac378b4934bb5d62ed0e2691e16306908e46cb2"} Mar 08 01:06:03 crc kubenswrapper[4762]: I0308 01:06:03.602116 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548866-qzcd8" event={"ID":"b5bcb215-1281-464a-aa1f-28099e754a1f","Type":"ContainerStarted","Data":"58e44cf0459194542f7f94d28c1dae6895e99f2b71c8a82f73676bb174224f12"} Mar 08 01:06:03 crc kubenswrapper[4762]: I0308 01:06:03.661663 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29548866-qzcd8" podStartSLOduration=1.729556177 podStartE2EDuration="3.661644092s" podCreationTimestamp="2026-03-08 01:06:00 +0000 UTC" firstStartedPulling="2026-03-08 01:06:01.03356741 +0000 UTC m=+2582.507711754" lastFinishedPulling="2026-03-08 01:06:02.965655315 +0000 UTC m=+2584.439799669" observedRunningTime="2026-03-08 01:06:03.649920041 +0000 UTC m=+2585.124064415" watchObservedRunningTime="2026-03-08 01:06:03.661644092 +0000 UTC m=+2585.135788446" Mar 08 01:06:04 crc kubenswrapper[4762]: I0308 01:06:04.618515 4762 generic.go:334] "Generic (PLEG): container finished" podID="b5bcb215-1281-464a-aa1f-28099e754a1f" containerID="58e44cf0459194542f7f94d28c1dae6895e99f2b71c8a82f73676bb174224f12" exitCode=0 Mar 08 01:06:04 crc kubenswrapper[4762]: I0308 01:06:04.619131 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548866-qzcd8" event={"ID":"b5bcb215-1281-464a-aa1f-28099e754a1f","Type":"ContainerDied","Data":"58e44cf0459194542f7f94d28c1dae6895e99f2b71c8a82f73676bb174224f12"} Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.103571 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.255112 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzl4g\" (UniqueName: \"kubernetes.io/projected/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-kube-api-access-hzl4g\") pod \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.255191 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-0\") pod \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.255349 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-2\") pod \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.255427 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-inventory\") pod \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.255505 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-1\") pod \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " Mar 08 01:06:05 crc 
kubenswrapper[4762]: I0308 01:06:05.255574 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ssh-key-openstack-edpm-ipam\") pod \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.255616 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-telemetry-combined-ca-bundle\") pod \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\" (UID: \"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853\") " Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.265337 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:06:05 crc kubenswrapper[4762]: E0308 01:06:05.266232 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.267360 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-kube-api-access-hzl4g" (OuterVolumeSpecName: "kube-api-access-hzl4g") pod "c4f0b3fa-3113-4b3a-8dc1-bf91b0968853" (UID: "c4f0b3fa-3113-4b3a-8dc1-bf91b0968853"). InnerVolumeSpecName "kube-api-access-hzl4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.276012 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "c4f0b3fa-3113-4b3a-8dc1-bf91b0968853" (UID: "c4f0b3fa-3113-4b3a-8dc1-bf91b0968853"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.295155 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "c4f0b3fa-3113-4b3a-8dc1-bf91b0968853" (UID: "c4f0b3fa-3113-4b3a-8dc1-bf91b0968853"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.313684 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-inventory" (OuterVolumeSpecName: "inventory") pod "c4f0b3fa-3113-4b3a-8dc1-bf91b0968853" (UID: "c4f0b3fa-3113-4b3a-8dc1-bf91b0968853"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.315511 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "c4f0b3fa-3113-4b3a-8dc1-bf91b0968853" (UID: "c4f0b3fa-3113-4b3a-8dc1-bf91b0968853"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.316794 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "c4f0b3fa-3113-4b3a-8dc1-bf91b0968853" (UID: "c4f0b3fa-3113-4b3a-8dc1-bf91b0968853"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.329779 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c4f0b3fa-3113-4b3a-8dc1-bf91b0968853" (UID: "c4f0b3fa-3113-4b3a-8dc1-bf91b0968853"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.363021 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.363054 4762 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.363066 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.363094 4762 reconciler_common.go:293] "Volume detached for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.363106 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzl4g\" (UniqueName: \"kubernetes.io/projected/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-kube-api-access-hzl4g\") on node \"crc\" DevicePath \"\"" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.363116 4762 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.363124 4762 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.633526 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" event={"ID":"c4f0b3fa-3113-4b3a-8dc1-bf91b0968853","Type":"ContainerDied","Data":"476a83c7594c46be75296c7a14f559a46c627d7c20099a02237ac898f6a72137"} Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.633598 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="476a83c7594c46be75296c7a14f559a46c627d7c20099a02237ac898f6a72137" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.634007 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.755117 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4"] Mar 08 01:06:05 crc kubenswrapper[4762]: E0308 01:06:05.756011 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f0b3fa-3113-4b3a-8dc1-bf91b0968853" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.756035 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f0b3fa-3113-4b3a-8dc1-bf91b0968853" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.756277 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f0b3fa-3113-4b3a-8dc1-bf91b0968853" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.757182 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.760308 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.760635 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.766089 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.766529 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.766603 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.771845 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4"] Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.874880 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.874977 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqcqn\" (UniqueName: \"kubernetes.io/projected/ac67effd-cb96-48f9-ac06-fa24004495ae-kube-api-access-lqcqn\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.875030 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.875112 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.875299 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.875379 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.875407 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.977039 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.977091 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.977111 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-0\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.977139 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.977164 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqcqn\" (UniqueName: \"kubernetes.io/projected/ac67effd-cb96-48f9-ac06-fa24004495ae-kube-api-access-lqcqn\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.977184 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.977228 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ssh-key-openstack-edpm-ipam\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.982740 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.983777 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.984132 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.986025 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.986716 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.992121 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:05 crc kubenswrapper[4762]: I0308 01:06:05.994503 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqcqn\" (UniqueName: \"kubernetes.io/projected/ac67effd-cb96-48f9-ac06-fa24004495ae-kube-api-access-lqcqn\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:06 crc kubenswrapper[4762]: I0308 01:06:06.082267 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:06:06 crc kubenswrapper[4762]: I0308 01:06:06.097382 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548866-qzcd8" Mar 08 01:06:06 crc kubenswrapper[4762]: I0308 01:06:06.284664 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4pg8\" (UniqueName: \"kubernetes.io/projected/b5bcb215-1281-464a-aa1f-28099e754a1f-kube-api-access-h4pg8\") pod \"b5bcb215-1281-464a-aa1f-28099e754a1f\" (UID: \"b5bcb215-1281-464a-aa1f-28099e754a1f\") " Mar 08 01:06:06 crc kubenswrapper[4762]: I0308 01:06:06.293663 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5bcb215-1281-464a-aa1f-28099e754a1f-kube-api-access-h4pg8" (OuterVolumeSpecName: "kube-api-access-h4pg8") pod "b5bcb215-1281-464a-aa1f-28099e754a1f" (UID: "b5bcb215-1281-464a-aa1f-28099e754a1f"). InnerVolumeSpecName "kube-api-access-h4pg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:06:06 crc kubenswrapper[4762]: I0308 01:06:06.389750 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4pg8\" (UniqueName: \"kubernetes.io/projected/b5bcb215-1281-464a-aa1f-28099e754a1f-kube-api-access-h4pg8\") on node \"crc\" DevicePath \"\"" Mar 08 01:06:06 crc kubenswrapper[4762]: I0308 01:06:06.649089 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548866-qzcd8" event={"ID":"b5bcb215-1281-464a-aa1f-28099e754a1f","Type":"ContainerDied","Data":"90860676d1422ad8a77742cfc6af7a31c5a75760553c488f9907f34cb22e40e4"} Mar 08 01:06:06 crc kubenswrapper[4762]: I0308 01:06:06.649125 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548866-qzcd8" Mar 08 01:06:06 crc kubenswrapper[4762]: I0308 01:06:06.649134 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90860676d1422ad8a77742cfc6af7a31c5a75760553c488f9907f34cb22e40e4" Mar 08 01:06:06 crc kubenswrapper[4762]: I0308 01:06:06.690151 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4"] Mar 08 01:06:06 crc kubenswrapper[4762]: W0308 01:06:06.691931 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac67effd_cb96_48f9_ac06_fa24004495ae.slice/crio-b914fa5d533bf344a2e0040331194db265e8051e54aa64b82eacbae618ca08e2 WatchSource:0}: Error finding container b914fa5d533bf344a2e0040331194db265e8051e54aa64b82eacbae618ca08e2: Status 404 returned error can't find the container with id b914fa5d533bf344a2e0040331194db265e8051e54aa64b82eacbae618ca08e2 Mar 08 01:06:07 crc kubenswrapper[4762]: I0308 01:06:07.199015 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548860-tmc6j"] Mar 08 01:06:07 crc kubenswrapper[4762]: I0308 01:06:07.210347 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548860-tmc6j"] Mar 08 01:06:07 crc kubenswrapper[4762]: I0308 01:06:07.284116 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0" path="/var/lib/kubelet/pods/7acb2fc8-94dd-4385-a5bf-1f024fb5e6e0/volumes" Mar 08 01:06:07 crc kubenswrapper[4762]: I0308 01:06:07.661783 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" event={"ID":"ac67effd-cb96-48f9-ac06-fa24004495ae","Type":"ContainerStarted","Data":"c37d0497b5816748453bdb1644669a034cd8b90073d77323d65b566aa92c8ec1"} Mar 08 
01:06:07 crc kubenswrapper[4762]: I0308 01:06:07.662212 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" event={"ID":"ac67effd-cb96-48f9-ac06-fa24004495ae","Type":"ContainerStarted","Data":"b914fa5d533bf344a2e0040331194db265e8051e54aa64b82eacbae618ca08e2"} Mar 08 01:06:07 crc kubenswrapper[4762]: I0308 01:06:07.683435 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" podStartSLOduration=2.128476957 podStartE2EDuration="2.683404146s" podCreationTimestamp="2026-03-08 01:06:05 +0000 UTC" firstStartedPulling="2026-03-08 01:06:06.694142745 +0000 UTC m=+2588.168287089" lastFinishedPulling="2026-03-08 01:06:07.249069894 +0000 UTC m=+2588.723214278" observedRunningTime="2026-03-08 01:06:07.675967968 +0000 UTC m=+2589.150112322" watchObservedRunningTime="2026-03-08 01:06:07.683404146 +0000 UTC m=+2589.157548530" Mar 08 01:06:18 crc kubenswrapper[4762]: I0308 01:06:18.272068 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:06:18 crc kubenswrapper[4762]: E0308 01:06:18.275976 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:06:32 crc kubenswrapper[4762]: I0308 01:06:32.263803 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:06:32 crc kubenswrapper[4762]: E0308 01:06:32.264842 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:06:44 crc kubenswrapper[4762]: I0308 01:06:44.263545 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:06:44 crc kubenswrapper[4762]: E0308 01:06:44.264750 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:06:57 crc kubenswrapper[4762]: I0308 01:06:57.263633 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:06:57 crc kubenswrapper[4762]: E0308 01:06:57.265103 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:07:10 crc kubenswrapper[4762]: I0308 01:07:10.263885 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:07:10 crc kubenswrapper[4762]: E0308 01:07:10.265177 4762 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:07:22 crc kubenswrapper[4762]: I0308 01:07:22.263705 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:07:22 crc kubenswrapper[4762]: E0308 01:07:22.264728 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:07:29 crc kubenswrapper[4762]: I0308 01:07:29.288660 4762 scope.go:117] "RemoveContainer" containerID="ba9d95096de9ea1f95ccfb9d633d6d4d8806c5065428090ee1a626a1812acce0" Mar 08 01:07:37 crc kubenswrapper[4762]: I0308 01:07:37.264467 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:07:37 crc kubenswrapper[4762]: E0308 01:07:37.265977 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:07:49 crc kubenswrapper[4762]: I0308 01:07:49.269250 4762 scope.go:117] 
"RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:07:49 crc kubenswrapper[4762]: E0308 01:07:49.270044 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:08:00 crc kubenswrapper[4762]: I0308 01:08:00.181194 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548868-v99mj"] Mar 08 01:08:00 crc kubenswrapper[4762]: E0308 01:08:00.182710 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5bcb215-1281-464a-aa1f-28099e754a1f" containerName="oc" Mar 08 01:08:00 crc kubenswrapper[4762]: I0308 01:08:00.182735 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5bcb215-1281-464a-aa1f-28099e754a1f" containerName="oc" Mar 08 01:08:00 crc kubenswrapper[4762]: I0308 01:08:00.183233 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5bcb215-1281-464a-aa1f-28099e754a1f" containerName="oc" Mar 08 01:08:00 crc kubenswrapper[4762]: I0308 01:08:00.184449 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548868-v99mj" Mar 08 01:08:00 crc kubenswrapper[4762]: I0308 01:08:00.188669 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:08:00 crc kubenswrapper[4762]: I0308 01:08:00.188726 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:08:00 crc kubenswrapper[4762]: I0308 01:08:00.189162 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:08:00 crc kubenswrapper[4762]: I0308 01:08:00.199439 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548868-v99mj"] Mar 08 01:08:00 crc kubenswrapper[4762]: I0308 01:08:00.226667 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmnc2\" (UniqueName: \"kubernetes.io/projected/f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae-kube-api-access-wmnc2\") pod \"auto-csr-approver-29548868-v99mj\" (UID: \"f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae\") " pod="openshift-infra/auto-csr-approver-29548868-v99mj" Mar 08 01:08:00 crc kubenswrapper[4762]: I0308 01:08:00.328486 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmnc2\" (UniqueName: \"kubernetes.io/projected/f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae-kube-api-access-wmnc2\") pod \"auto-csr-approver-29548868-v99mj\" (UID: \"f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae\") " pod="openshift-infra/auto-csr-approver-29548868-v99mj" Mar 08 01:08:00 crc kubenswrapper[4762]: I0308 01:08:00.357963 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmnc2\" (UniqueName: \"kubernetes.io/projected/f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae-kube-api-access-wmnc2\") pod \"auto-csr-approver-29548868-v99mj\" (UID: \"f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae\") " 
pod="openshift-infra/auto-csr-approver-29548868-v99mj" Mar 08 01:08:00 crc kubenswrapper[4762]: I0308 01:08:00.526882 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548868-v99mj" Mar 08 01:08:01 crc kubenswrapper[4762]: I0308 01:08:01.052243 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548868-v99mj"] Mar 08 01:08:01 crc kubenswrapper[4762]: I0308 01:08:01.192625 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548868-v99mj" event={"ID":"f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae","Type":"ContainerStarted","Data":"c5f27725c352d7ec7825168ac1718ab5a292bf41b49096c5b7c9b5096a7b631e"} Mar 08 01:08:02 crc kubenswrapper[4762]: I0308 01:08:02.263406 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:08:02 crc kubenswrapper[4762]: E0308 01:08:02.264317 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:08:02 crc kubenswrapper[4762]: E0308 01:08:02.873648 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf90e8bfe_926a_4a3a_bbfd_7e46ed7767ae.slice/crio-conmon-9abb8ba4cbac2012bf57ab6390e8e0d1b0dd634b25780efb00a63fe4783ecc4d.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf90e8bfe_926a_4a3a_bbfd_7e46ed7767ae.slice/crio-9abb8ba4cbac2012bf57ab6390e8e0d1b0dd634b25780efb00a63fe4783ecc4d.scope\": RecentStats: unable to find data in memory cache]" Mar 08 01:08:03 crc kubenswrapper[4762]: I0308 01:08:03.234295 4762 generic.go:334] "Generic (PLEG): container finished" podID="f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae" containerID="9abb8ba4cbac2012bf57ab6390e8e0d1b0dd634b25780efb00a63fe4783ecc4d" exitCode=0 Mar 08 01:08:03 crc kubenswrapper[4762]: I0308 01:08:03.234398 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548868-v99mj" event={"ID":"f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae","Type":"ContainerDied","Data":"9abb8ba4cbac2012bf57ab6390e8e0d1b0dd634b25780efb00a63fe4783ecc4d"} Mar 08 01:08:04 crc kubenswrapper[4762]: I0308 01:08:04.701722 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548868-v99mj" Mar 08 01:08:04 crc kubenswrapper[4762]: I0308 01:08:04.898798 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmnc2\" (UniqueName: \"kubernetes.io/projected/f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae-kube-api-access-wmnc2\") pod \"f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae\" (UID: \"f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae\") " Mar 08 01:08:04 crc kubenswrapper[4762]: I0308 01:08:04.906055 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae-kube-api-access-wmnc2" (OuterVolumeSpecName: "kube-api-access-wmnc2") pod "f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae" (UID: "f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae"). InnerVolumeSpecName "kube-api-access-wmnc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:08:05 crc kubenswrapper[4762]: I0308 01:08:05.001867 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmnc2\" (UniqueName: \"kubernetes.io/projected/f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae-kube-api-access-wmnc2\") on node \"crc\" DevicePath \"\"" Mar 08 01:08:05 crc kubenswrapper[4762]: I0308 01:08:05.258447 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548868-v99mj" event={"ID":"f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae","Type":"ContainerDied","Data":"c5f27725c352d7ec7825168ac1718ab5a292bf41b49096c5b7c9b5096a7b631e"} Mar 08 01:08:05 crc kubenswrapper[4762]: I0308 01:08:05.258490 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5f27725c352d7ec7825168ac1718ab5a292bf41b49096c5b7c9b5096a7b631e" Mar 08 01:08:05 crc kubenswrapper[4762]: I0308 01:08:05.258548 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548868-v99mj" Mar 08 01:08:05 crc kubenswrapper[4762]: I0308 01:08:05.807385 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548862-qvgrl"] Mar 08 01:08:05 crc kubenswrapper[4762]: I0308 01:08:05.820540 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548862-qvgrl"] Mar 08 01:08:07 crc kubenswrapper[4762]: I0308 01:08:07.291293 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1fd34ad-ad62-40c8-a342-fcafb93271a1" path="/var/lib/kubelet/pods/e1fd34ad-ad62-40c8-a342-fcafb93271a1/volumes" Mar 08 01:08:14 crc kubenswrapper[4762]: I0308 01:08:14.263746 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:08:14 crc kubenswrapper[4762]: E0308 01:08:14.264903 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:08:22 crc kubenswrapper[4762]: I0308 01:08:22.524215 4762 generic.go:334] "Generic (PLEG): container finished" podID="ac67effd-cb96-48f9-ac06-fa24004495ae" containerID="c37d0497b5816748453bdb1644669a034cd8b90073d77323d65b566aa92c8ec1" exitCode=0 Mar 08 01:08:22 crc kubenswrapper[4762]: I0308 01:08:22.524300 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" event={"ID":"ac67effd-cb96-48f9-ac06-fa24004495ae","Type":"ContainerDied","Data":"c37d0497b5816748453bdb1644669a034cd8b90073d77323d65b566aa92c8ec1"} Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.170729 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.285486 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ssh-key-openstack-edpm-ipam\") pod \"ac67effd-cb96-48f9-ac06-fa24004495ae\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.285580 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqcqn\" (UniqueName: \"kubernetes.io/projected/ac67effd-cb96-48f9-ac06-fa24004495ae-kube-api-access-lqcqn\") pod \"ac67effd-cb96-48f9-ac06-fa24004495ae\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.285732 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-inventory\") pod \"ac67effd-cb96-48f9-ac06-fa24004495ae\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.286961 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-telemetry-power-monitoring-combined-ca-bundle\") pod \"ac67effd-cb96-48f9-ac06-fa24004495ae\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.287021 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-2\") pod \"ac67effd-cb96-48f9-ac06-fa24004495ae\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " Mar 08 
01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.287097 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-1\") pod \"ac67effd-cb96-48f9-ac06-fa24004495ae\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.287246 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-0\") pod \"ac67effd-cb96-48f9-ac06-fa24004495ae\" (UID: \"ac67effd-cb96-48f9-ac06-fa24004495ae\") " Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.293038 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "ac67effd-cb96-48f9-ac06-fa24004495ae" (UID: "ac67effd-cb96-48f9-ac06-fa24004495ae"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.293600 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac67effd-cb96-48f9-ac06-fa24004495ae-kube-api-access-lqcqn" (OuterVolumeSpecName: "kube-api-access-lqcqn") pod "ac67effd-cb96-48f9-ac06-fa24004495ae" (UID: "ac67effd-cb96-48f9-ac06-fa24004495ae"). InnerVolumeSpecName "kube-api-access-lqcqn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.323171 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "ac67effd-cb96-48f9-ac06-fa24004495ae" (UID: "ac67effd-cb96-48f9-ac06-fa24004495ae"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.324814 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "ac67effd-cb96-48f9-ac06-fa24004495ae" (UID: "ac67effd-cb96-48f9-ac06-fa24004495ae"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.330486 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-inventory" (OuterVolumeSpecName: "inventory") pod "ac67effd-cb96-48f9-ac06-fa24004495ae" (UID: "ac67effd-cb96-48f9-ac06-fa24004495ae"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.339575 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "ac67effd-cb96-48f9-ac06-fa24004495ae" (UID: "ac67effd-cb96-48f9-ac06-fa24004495ae"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.340155 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ac67effd-cb96-48f9-ac06-fa24004495ae" (UID: "ac67effd-cb96-48f9-ac06-fa24004495ae"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.390041 4762 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.390077 4762 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.390094 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.390107 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqcqn\" (UniqueName: \"kubernetes.io/projected/ac67effd-cb96-48f9-ac06-fa24004495ae-kube-api-access-lqcqn\") on node \"crc\" DevicePath \"\"" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.390119 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 
01:08:24.390133 4762 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.390146 4762 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/ac67effd-cb96-48f9-ac06-fa24004495ae-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.557406 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" event={"ID":"ac67effd-cb96-48f9-ac06-fa24004495ae","Type":"ContainerDied","Data":"b914fa5d533bf344a2e0040331194db265e8051e54aa64b82eacbae618ca08e2"} Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.557532 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b914fa5d533bf344a2e0040331194db265e8051e54aa64b82eacbae618ca08e2" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.557646 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.687655 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4"] Mar 08 01:08:24 crc kubenswrapper[4762]: E0308 01:08:24.690855 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac67effd-cb96-48f9-ac06-fa24004495ae" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.690892 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac67effd-cb96-48f9-ac06-fa24004495ae" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 08 01:08:24 crc kubenswrapper[4762]: E0308 01:08:24.690916 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae" containerName="oc" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.690922 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae" containerName="oc" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.691328 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae" containerName="oc" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.691376 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac67effd-cb96-48f9-ac06-fa24004495ae" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.693517 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.696485 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.696685 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.696837 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.696982 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.697078 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.703101 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4"] Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.805562 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llgkb\" (UniqueName: \"kubernetes.io/projected/f5592be0-d479-4dff-8f2d-b86453bd2697-kube-api-access-llgkb\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pvpw4\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.805729 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pvpw4\" (UID: 
\"f5592be0-d479-4dff-8f2d-b86453bd2697\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.805944 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pvpw4\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.805981 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pvpw4\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.806010 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pvpw4\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.908437 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pvpw4\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:24 crc kubenswrapper[4762]: 
I0308 01:08:24.908918 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pvpw4\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.909209 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pvpw4\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.909435 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pvpw4\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.909812 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llgkb\" (UniqueName: \"kubernetes.io/projected/f5592be0-d479-4dff-8f2d-b86453bd2697-kube-api-access-llgkb\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pvpw4\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.913833 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-inventory\") pod 
\"logging-edpm-deployment-openstack-edpm-ipam-pvpw4\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.913942 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pvpw4\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.918822 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pvpw4\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.923258 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pvpw4\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:24 crc kubenswrapper[4762]: I0308 01:08:24.928648 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llgkb\" (UniqueName: \"kubernetes.io/projected/f5592be0-d479-4dff-8f2d-b86453bd2697-kube-api-access-llgkb\") pod \"logging-edpm-deployment-openstack-edpm-ipam-pvpw4\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:25 crc kubenswrapper[4762]: I0308 
01:08:25.023865 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:25 crc kubenswrapper[4762]: I0308 01:08:25.263991 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:08:25 crc kubenswrapper[4762]: E0308 01:08:25.264431 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:08:25 crc kubenswrapper[4762]: I0308 01:08:25.627006 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4"] Mar 08 01:08:26 crc kubenswrapper[4762]: I0308 01:08:26.574888 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" event={"ID":"f5592be0-d479-4dff-8f2d-b86453bd2697","Type":"ContainerStarted","Data":"1f5864d92bea8637cb7459ddab3ae69977942fe80491f64fe462799e1677194d"} Mar 08 01:08:26 crc kubenswrapper[4762]: I0308 01:08:26.575283 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" event={"ID":"f5592be0-d479-4dff-8f2d-b86453bd2697","Type":"ContainerStarted","Data":"34c2cb885aca0fe9ea3491c8ab32815f9c84e26ce8b75fb41bf1db56b6fe6a27"} Mar 08 01:08:26 crc kubenswrapper[4762]: I0308 01:08:26.593823 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" podStartSLOduration=2.117946292 podStartE2EDuration="2.59379957s" podCreationTimestamp="2026-03-08 01:08:24 
+0000 UTC" firstStartedPulling="2026-03-08 01:08:25.629451675 +0000 UTC m=+2727.103596019" lastFinishedPulling="2026-03-08 01:08:26.105304953 +0000 UTC m=+2727.579449297" observedRunningTime="2026-03-08 01:08:26.588571689 +0000 UTC m=+2728.062716063" watchObservedRunningTime="2026-03-08 01:08:26.59379957 +0000 UTC m=+2728.067943914" Mar 08 01:08:29 crc kubenswrapper[4762]: I0308 01:08:29.412726 4762 scope.go:117] "RemoveContainer" containerID="0366d6628fe2b7aa9666be4ebf602bd7cc0bd39bb348ce4b163efb01b47ba4c7" Mar 08 01:08:31 crc kubenswrapper[4762]: I0308 01:08:31.794345 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-62tcx"] Mar 08 01:08:31 crc kubenswrapper[4762]: I0308 01:08:31.800723 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:31 crc kubenswrapper[4762]: I0308 01:08:31.811029 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-62tcx"] Mar 08 01:08:31 crc kubenswrapper[4762]: I0308 01:08:31.875667 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wr6k\" (UniqueName: \"kubernetes.io/projected/0986e8c3-99b2-4351-badf-bc6369aa7633-kube-api-access-9wr6k\") pod \"redhat-operators-62tcx\" (UID: \"0986e8c3-99b2-4351-badf-bc6369aa7633\") " pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:31 crc kubenswrapper[4762]: I0308 01:08:31.875770 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0986e8c3-99b2-4351-badf-bc6369aa7633-utilities\") pod \"redhat-operators-62tcx\" (UID: \"0986e8c3-99b2-4351-badf-bc6369aa7633\") " pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:31 crc kubenswrapper[4762]: I0308 01:08:31.875957 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0986e8c3-99b2-4351-badf-bc6369aa7633-catalog-content\") pod \"redhat-operators-62tcx\" (UID: \"0986e8c3-99b2-4351-badf-bc6369aa7633\") " pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:31 crc kubenswrapper[4762]: I0308 01:08:31.978671 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0986e8c3-99b2-4351-badf-bc6369aa7633-catalog-content\") pod \"redhat-operators-62tcx\" (UID: \"0986e8c3-99b2-4351-badf-bc6369aa7633\") " pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:31 crc kubenswrapper[4762]: I0308 01:08:31.978883 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wr6k\" (UniqueName: \"kubernetes.io/projected/0986e8c3-99b2-4351-badf-bc6369aa7633-kube-api-access-9wr6k\") pod \"redhat-operators-62tcx\" (UID: \"0986e8c3-99b2-4351-badf-bc6369aa7633\") " pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:31 crc kubenswrapper[4762]: I0308 01:08:31.978949 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0986e8c3-99b2-4351-badf-bc6369aa7633-utilities\") pod \"redhat-operators-62tcx\" (UID: \"0986e8c3-99b2-4351-badf-bc6369aa7633\") " pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:31 crc kubenswrapper[4762]: I0308 01:08:31.979139 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0986e8c3-99b2-4351-badf-bc6369aa7633-catalog-content\") pod \"redhat-operators-62tcx\" (UID: \"0986e8c3-99b2-4351-badf-bc6369aa7633\") " pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:31 crc kubenswrapper[4762]: I0308 01:08:31.979517 4762 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0986e8c3-99b2-4351-badf-bc6369aa7633-utilities\") pod \"redhat-operators-62tcx\" (UID: \"0986e8c3-99b2-4351-badf-bc6369aa7633\") " pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:32 crc kubenswrapper[4762]: I0308 01:08:32.010539 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wr6k\" (UniqueName: \"kubernetes.io/projected/0986e8c3-99b2-4351-badf-bc6369aa7633-kube-api-access-9wr6k\") pod \"redhat-operators-62tcx\" (UID: \"0986e8c3-99b2-4351-badf-bc6369aa7633\") " pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:32 crc kubenswrapper[4762]: I0308 01:08:32.136557 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:32 crc kubenswrapper[4762]: W0308 01:08:32.651570 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0986e8c3_99b2_4351_badf_bc6369aa7633.slice/crio-19d816c6efd3b22f697d0513ec492968e8cee3767615b4dfc19c06890b2204ea WatchSource:0}: Error finding container 19d816c6efd3b22f697d0513ec492968e8cee3767615b4dfc19c06890b2204ea: Status 404 returned error can't find the container with id 19d816c6efd3b22f697d0513ec492968e8cee3767615b4dfc19c06890b2204ea Mar 08 01:08:32 crc kubenswrapper[4762]: I0308 01:08:32.653786 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-62tcx"] Mar 08 01:08:33 crc kubenswrapper[4762]: I0308 01:08:33.671940 4762 generic.go:334] "Generic (PLEG): container finished" podID="0986e8c3-99b2-4351-badf-bc6369aa7633" containerID="be020d88ec6141974c4bd96fa56d45687a058fc23c15024753eeca05e77457cf" exitCode=0 Mar 08 01:08:33 crc kubenswrapper[4762]: I0308 01:08:33.671993 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62tcx" 
event={"ID":"0986e8c3-99b2-4351-badf-bc6369aa7633","Type":"ContainerDied","Data":"be020d88ec6141974c4bd96fa56d45687a058fc23c15024753eeca05e77457cf"} Mar 08 01:08:33 crc kubenswrapper[4762]: I0308 01:08:33.672024 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62tcx" event={"ID":"0986e8c3-99b2-4351-badf-bc6369aa7633","Type":"ContainerStarted","Data":"19d816c6efd3b22f697d0513ec492968e8cee3767615b4dfc19c06890b2204ea"} Mar 08 01:08:34 crc kubenswrapper[4762]: I0308 01:08:34.683672 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62tcx" event={"ID":"0986e8c3-99b2-4351-badf-bc6369aa7633","Type":"ContainerStarted","Data":"6681fb8f26cc7838597b8308ba3b862e17c29c8997b2b299bea6962573b15862"} Mar 08 01:08:37 crc kubenswrapper[4762]: I0308 01:08:37.264449 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:08:37 crc kubenswrapper[4762]: E0308 01:08:37.265430 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:08:39 crc kubenswrapper[4762]: I0308 01:08:39.747527 4762 generic.go:334] "Generic (PLEG): container finished" podID="0986e8c3-99b2-4351-badf-bc6369aa7633" containerID="6681fb8f26cc7838597b8308ba3b862e17c29c8997b2b299bea6962573b15862" exitCode=0 Mar 08 01:08:39 crc kubenswrapper[4762]: I0308 01:08:39.747603 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62tcx" 
event={"ID":"0986e8c3-99b2-4351-badf-bc6369aa7633","Type":"ContainerDied","Data":"6681fb8f26cc7838597b8308ba3b862e17c29c8997b2b299bea6962573b15862"} Mar 08 01:08:40 crc kubenswrapper[4762]: I0308 01:08:40.761098 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62tcx" event={"ID":"0986e8c3-99b2-4351-badf-bc6369aa7633","Type":"ContainerStarted","Data":"248022d39cd79c8ea5578c1f573f93aa932abc25b7bab1e8bb69b3a86117dc16"} Mar 08 01:08:40 crc kubenswrapper[4762]: I0308 01:08:40.810535 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-62tcx" podStartSLOduration=3.319755975 podStartE2EDuration="9.810514242s" podCreationTimestamp="2026-03-08 01:08:31 +0000 UTC" firstStartedPulling="2026-03-08 01:08:33.675337786 +0000 UTC m=+2735.149482130" lastFinishedPulling="2026-03-08 01:08:40.166096043 +0000 UTC m=+2741.640240397" observedRunningTime="2026-03-08 01:08:40.780918463 +0000 UTC m=+2742.255062807" watchObservedRunningTime="2026-03-08 01:08:40.810514242 +0000 UTC m=+2742.284658586" Mar 08 01:08:42 crc kubenswrapper[4762]: I0308 01:08:42.137345 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:42 crc kubenswrapper[4762]: I0308 01:08:42.137405 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:43 crc kubenswrapper[4762]: I0308 01:08:43.195822 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-62tcx" podUID="0986e8c3-99b2-4351-badf-bc6369aa7633" containerName="registry-server" probeResult="failure" output=< Mar 08 01:08:43 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 01:08:43 crc kubenswrapper[4762]: > Mar 08 01:08:45 crc kubenswrapper[4762]: I0308 01:08:45.826071 4762 generic.go:334] "Generic (PLEG): 
container finished" podID="f5592be0-d479-4dff-8f2d-b86453bd2697" containerID="1f5864d92bea8637cb7459ddab3ae69977942fe80491f64fe462799e1677194d" exitCode=0 Mar 08 01:08:45 crc kubenswrapper[4762]: I0308 01:08:45.826173 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" event={"ID":"f5592be0-d479-4dff-8f2d-b86453bd2697","Type":"ContainerDied","Data":"1f5864d92bea8637cb7459ddab3ae69977942fe80491f64fe462799e1677194d"} Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.416780 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.448686 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-ssh-key-openstack-edpm-ipam\") pod \"f5592be0-d479-4dff-8f2d-b86453bd2697\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.448979 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llgkb\" (UniqueName: \"kubernetes.io/projected/f5592be0-d479-4dff-8f2d-b86453bd2697-kube-api-access-llgkb\") pod \"f5592be0-d479-4dff-8f2d-b86453bd2697\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.449016 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-logging-compute-config-data-1\") pod \"f5592be0-d479-4dff-8f2d-b86453bd2697\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.449161 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-inventory\") pod \"f5592be0-d479-4dff-8f2d-b86453bd2697\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.449200 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-logging-compute-config-data-0\") pod \"f5592be0-d479-4dff-8f2d-b86453bd2697\" (UID: \"f5592be0-d479-4dff-8f2d-b86453bd2697\") " Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.463929 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5592be0-d479-4dff-8f2d-b86453bd2697-kube-api-access-llgkb" (OuterVolumeSpecName: "kube-api-access-llgkb") pod "f5592be0-d479-4dff-8f2d-b86453bd2697" (UID: "f5592be0-d479-4dff-8f2d-b86453bd2697"). InnerVolumeSpecName "kube-api-access-llgkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.496576 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "f5592be0-d479-4dff-8f2d-b86453bd2697" (UID: "f5592be0-d479-4dff-8f2d-b86453bd2697"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.500451 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-inventory" (OuterVolumeSpecName: "inventory") pod "f5592be0-d479-4dff-8f2d-b86453bd2697" (UID: "f5592be0-d479-4dff-8f2d-b86453bd2697"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.508255 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "f5592be0-d479-4dff-8f2d-b86453bd2697" (UID: "f5592be0-d479-4dff-8f2d-b86453bd2697"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.517649 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f5592be0-d479-4dff-8f2d-b86453bd2697" (UID: "f5592be0-d479-4dff-8f2d-b86453bd2697"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.558285 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.558346 4762 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.558390 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.558411 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llgkb\" 
(UniqueName: \"kubernetes.io/projected/f5592be0-d479-4dff-8f2d-b86453bd2697-kube-api-access-llgkb\") on node \"crc\" DevicePath \"\"" Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.558434 4762 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f5592be0-d479-4dff-8f2d-b86453bd2697-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.854354 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" event={"ID":"f5592be0-d479-4dff-8f2d-b86453bd2697","Type":"ContainerDied","Data":"34c2cb885aca0fe9ea3491c8ab32815f9c84e26ce8b75fb41bf1db56b6fe6a27"} Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.854412 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34c2cb885aca0fe9ea3491c8ab32815f9c84e26ce8b75fb41bf1db56b6fe6a27" Mar 08 01:08:47 crc kubenswrapper[4762]: I0308 01:08:47.854451 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4" Mar 08 01:08:52 crc kubenswrapper[4762]: I0308 01:08:52.189707 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:52 crc kubenswrapper[4762]: I0308 01:08:52.249682 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:52 crc kubenswrapper[4762]: I0308 01:08:52.263266 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:08:52 crc kubenswrapper[4762]: E0308 01:08:52.263686 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:08:52 crc kubenswrapper[4762]: I0308 01:08:52.433926 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-62tcx"] Mar 08 01:08:53 crc kubenswrapper[4762]: I0308 01:08:53.966944 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-62tcx" podUID="0986e8c3-99b2-4351-badf-bc6369aa7633" containerName="registry-server" containerID="cri-o://248022d39cd79c8ea5578c1f573f93aa932abc25b7bab1e8bb69b3a86117dc16" gracePeriod=2 Mar 08 01:08:54 crc kubenswrapper[4762]: E0308 01:08:54.319220 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0986e8c3_99b2_4351_badf_bc6369aa7633.slice/crio-conmon-248022d39cd79c8ea5578c1f573f93aa932abc25b7bab1e8bb69b3a86117dc16.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0986e8c3_99b2_4351_badf_bc6369aa7633.slice/crio-248022d39cd79c8ea5578c1f573f93aa932abc25b7bab1e8bb69b3a86117dc16.scope\": RecentStats: unable to find data in memory cache]" Mar 08 01:08:54 crc kubenswrapper[4762]: I0308 01:08:54.531173 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:54 crc kubenswrapper[4762]: I0308 01:08:54.636045 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0986e8c3-99b2-4351-badf-bc6369aa7633-catalog-content\") pod \"0986e8c3-99b2-4351-badf-bc6369aa7633\" (UID: \"0986e8c3-99b2-4351-badf-bc6369aa7633\") " Mar 08 01:08:54 crc kubenswrapper[4762]: I0308 01:08:54.636477 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wr6k\" (UniqueName: \"kubernetes.io/projected/0986e8c3-99b2-4351-badf-bc6369aa7633-kube-api-access-9wr6k\") pod \"0986e8c3-99b2-4351-badf-bc6369aa7633\" (UID: \"0986e8c3-99b2-4351-badf-bc6369aa7633\") " Mar 08 01:08:54 crc kubenswrapper[4762]: I0308 01:08:54.636513 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0986e8c3-99b2-4351-badf-bc6369aa7633-utilities\") pod \"0986e8c3-99b2-4351-badf-bc6369aa7633\" (UID: \"0986e8c3-99b2-4351-badf-bc6369aa7633\") " Mar 08 01:08:54 crc kubenswrapper[4762]: I0308 01:08:54.637433 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0986e8c3-99b2-4351-badf-bc6369aa7633-utilities" (OuterVolumeSpecName: "utilities") 
pod "0986e8c3-99b2-4351-badf-bc6369aa7633" (UID: "0986e8c3-99b2-4351-badf-bc6369aa7633"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:08:54 crc kubenswrapper[4762]: I0308 01:08:54.646234 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0986e8c3-99b2-4351-badf-bc6369aa7633-kube-api-access-9wr6k" (OuterVolumeSpecName: "kube-api-access-9wr6k") pod "0986e8c3-99b2-4351-badf-bc6369aa7633" (UID: "0986e8c3-99b2-4351-badf-bc6369aa7633"). InnerVolumeSpecName "kube-api-access-9wr6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:08:54 crc kubenswrapper[4762]: I0308 01:08:54.739174 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wr6k\" (UniqueName: \"kubernetes.io/projected/0986e8c3-99b2-4351-badf-bc6369aa7633-kube-api-access-9wr6k\") on node \"crc\" DevicePath \"\"" Mar 08 01:08:54 crc kubenswrapper[4762]: I0308 01:08:54.739206 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0986e8c3-99b2-4351-badf-bc6369aa7633-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:08:54 crc kubenswrapper[4762]: I0308 01:08:54.786274 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0986e8c3-99b2-4351-badf-bc6369aa7633-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0986e8c3-99b2-4351-badf-bc6369aa7633" (UID: "0986e8c3-99b2-4351-badf-bc6369aa7633"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:08:54 crc kubenswrapper[4762]: I0308 01:08:54.841999 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0986e8c3-99b2-4351-badf-bc6369aa7633-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:08:54 crc kubenswrapper[4762]: I0308 01:08:54.984173 4762 generic.go:334] "Generic (PLEG): container finished" podID="0986e8c3-99b2-4351-badf-bc6369aa7633" containerID="248022d39cd79c8ea5578c1f573f93aa932abc25b7bab1e8bb69b3a86117dc16" exitCode=0 Mar 08 01:08:54 crc kubenswrapper[4762]: I0308 01:08:54.984329 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-62tcx" Mar 08 01:08:54 crc kubenswrapper[4762]: I0308 01:08:54.984292 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62tcx" event={"ID":"0986e8c3-99b2-4351-badf-bc6369aa7633","Type":"ContainerDied","Data":"248022d39cd79c8ea5578c1f573f93aa932abc25b7bab1e8bb69b3a86117dc16"} Mar 08 01:08:54 crc kubenswrapper[4762]: I0308 01:08:54.984465 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62tcx" event={"ID":"0986e8c3-99b2-4351-badf-bc6369aa7633","Type":"ContainerDied","Data":"19d816c6efd3b22f697d0513ec492968e8cee3767615b4dfc19c06890b2204ea"} Mar 08 01:08:54 crc kubenswrapper[4762]: I0308 01:08:54.984551 4762 scope.go:117] "RemoveContainer" containerID="248022d39cd79c8ea5578c1f573f93aa932abc25b7bab1e8bb69b3a86117dc16" Mar 08 01:08:55 crc kubenswrapper[4762]: I0308 01:08:55.040513 4762 scope.go:117] "RemoveContainer" containerID="6681fb8f26cc7838597b8308ba3b862e17c29c8997b2b299bea6962573b15862" Mar 08 01:08:55 crc kubenswrapper[4762]: I0308 01:08:55.059815 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-62tcx"] Mar 08 01:08:55 crc kubenswrapper[4762]: I0308 
01:08:55.079870 4762 scope.go:117] "RemoveContainer" containerID="be020d88ec6141974c4bd96fa56d45687a058fc23c15024753eeca05e77457cf" Mar 08 01:08:55 crc kubenswrapper[4762]: I0308 01:08:55.085609 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-62tcx"] Mar 08 01:08:55 crc kubenswrapper[4762]: I0308 01:08:55.133266 4762 scope.go:117] "RemoveContainer" containerID="248022d39cd79c8ea5578c1f573f93aa932abc25b7bab1e8bb69b3a86117dc16" Mar 08 01:08:55 crc kubenswrapper[4762]: E0308 01:08:55.133848 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"248022d39cd79c8ea5578c1f573f93aa932abc25b7bab1e8bb69b3a86117dc16\": container with ID starting with 248022d39cd79c8ea5578c1f573f93aa932abc25b7bab1e8bb69b3a86117dc16 not found: ID does not exist" containerID="248022d39cd79c8ea5578c1f573f93aa932abc25b7bab1e8bb69b3a86117dc16" Mar 08 01:08:55 crc kubenswrapper[4762]: I0308 01:08:55.133909 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"248022d39cd79c8ea5578c1f573f93aa932abc25b7bab1e8bb69b3a86117dc16"} err="failed to get container status \"248022d39cd79c8ea5578c1f573f93aa932abc25b7bab1e8bb69b3a86117dc16\": rpc error: code = NotFound desc = could not find container \"248022d39cd79c8ea5578c1f573f93aa932abc25b7bab1e8bb69b3a86117dc16\": container with ID starting with 248022d39cd79c8ea5578c1f573f93aa932abc25b7bab1e8bb69b3a86117dc16 not found: ID does not exist" Mar 08 01:08:55 crc kubenswrapper[4762]: I0308 01:08:55.133943 4762 scope.go:117] "RemoveContainer" containerID="6681fb8f26cc7838597b8308ba3b862e17c29c8997b2b299bea6962573b15862" Mar 08 01:08:55 crc kubenswrapper[4762]: E0308 01:08:55.134445 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6681fb8f26cc7838597b8308ba3b862e17c29c8997b2b299bea6962573b15862\": container with ID 
starting with 6681fb8f26cc7838597b8308ba3b862e17c29c8997b2b299bea6962573b15862 not found: ID does not exist" containerID="6681fb8f26cc7838597b8308ba3b862e17c29c8997b2b299bea6962573b15862" Mar 08 01:08:55 crc kubenswrapper[4762]: I0308 01:08:55.134474 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6681fb8f26cc7838597b8308ba3b862e17c29c8997b2b299bea6962573b15862"} err="failed to get container status \"6681fb8f26cc7838597b8308ba3b862e17c29c8997b2b299bea6962573b15862\": rpc error: code = NotFound desc = could not find container \"6681fb8f26cc7838597b8308ba3b862e17c29c8997b2b299bea6962573b15862\": container with ID starting with 6681fb8f26cc7838597b8308ba3b862e17c29c8997b2b299bea6962573b15862 not found: ID does not exist" Mar 08 01:08:55 crc kubenswrapper[4762]: I0308 01:08:55.134494 4762 scope.go:117] "RemoveContainer" containerID="be020d88ec6141974c4bd96fa56d45687a058fc23c15024753eeca05e77457cf" Mar 08 01:08:55 crc kubenswrapper[4762]: E0308 01:08:55.134804 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be020d88ec6141974c4bd96fa56d45687a058fc23c15024753eeca05e77457cf\": container with ID starting with be020d88ec6141974c4bd96fa56d45687a058fc23c15024753eeca05e77457cf not found: ID does not exist" containerID="be020d88ec6141974c4bd96fa56d45687a058fc23c15024753eeca05e77457cf" Mar 08 01:08:55 crc kubenswrapper[4762]: I0308 01:08:55.134831 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be020d88ec6141974c4bd96fa56d45687a058fc23c15024753eeca05e77457cf"} err="failed to get container status \"be020d88ec6141974c4bd96fa56d45687a058fc23c15024753eeca05e77457cf\": rpc error: code = NotFound desc = could not find container \"be020d88ec6141974c4bd96fa56d45687a058fc23c15024753eeca05e77457cf\": container with ID starting with be020d88ec6141974c4bd96fa56d45687a058fc23c15024753eeca05e77457cf not found: 
ID does not exist" Mar 08 01:08:55 crc kubenswrapper[4762]: I0308 01:08:55.282175 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0986e8c3-99b2-4351-badf-bc6369aa7633" path="/var/lib/kubelet/pods/0986e8c3-99b2-4351-badf-bc6369aa7633/volumes" Mar 08 01:09:03 crc kubenswrapper[4762]: I0308 01:09:03.263705 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:09:03 crc kubenswrapper[4762]: E0308 01:09:03.264934 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:09:16 crc kubenswrapper[4762]: I0308 01:09:16.263509 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:09:16 crc kubenswrapper[4762]: E0308 01:09:16.264711 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:09:27 crc kubenswrapper[4762]: I0308 01:09:27.263199 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:09:27 crc kubenswrapper[4762]: E0308 01:09:27.264143 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:09:42 crc kubenswrapper[4762]: I0308 01:09:42.262927 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:09:42 crc kubenswrapper[4762]: E0308 01:09:42.263586 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:09:56 crc kubenswrapper[4762]: I0308 01:09:56.263914 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:09:56 crc kubenswrapper[4762]: I0308 01:09:56.756161 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"811c2366d1dd066052dabcf66e6b8dd816dc127e5560d3cea5c3c417cbba5630"} Mar 08 01:10:00 crc kubenswrapper[4762]: I0308 01:10:00.158934 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548870-6zhvp"] Mar 08 01:10:00 crc kubenswrapper[4762]: E0308 01:10:00.160530 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0986e8c3-99b2-4351-badf-bc6369aa7633" containerName="registry-server" Mar 08 01:10:00 crc kubenswrapper[4762]: I0308 01:10:00.160560 4762 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0986e8c3-99b2-4351-badf-bc6369aa7633" containerName="registry-server" Mar 08 01:10:00 crc kubenswrapper[4762]: E0308 01:10:00.160601 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5592be0-d479-4dff-8f2d-b86453bd2697" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 08 01:10:00 crc kubenswrapper[4762]: I0308 01:10:00.160614 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5592be0-d479-4dff-8f2d-b86453bd2697" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 08 01:10:00 crc kubenswrapper[4762]: E0308 01:10:00.160644 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0986e8c3-99b2-4351-badf-bc6369aa7633" containerName="extract-content" Mar 08 01:10:00 crc kubenswrapper[4762]: I0308 01:10:00.160656 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="0986e8c3-99b2-4351-badf-bc6369aa7633" containerName="extract-content" Mar 08 01:10:00 crc kubenswrapper[4762]: E0308 01:10:00.160721 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0986e8c3-99b2-4351-badf-bc6369aa7633" containerName="extract-utilities" Mar 08 01:10:00 crc kubenswrapper[4762]: I0308 01:10:00.160733 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="0986e8c3-99b2-4351-badf-bc6369aa7633" containerName="extract-utilities" Mar 08 01:10:00 crc kubenswrapper[4762]: I0308 01:10:00.161197 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5592be0-d479-4dff-8f2d-b86453bd2697" containerName="logging-edpm-deployment-openstack-edpm-ipam" Mar 08 01:10:00 crc kubenswrapper[4762]: I0308 01:10:00.161236 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="0986e8c3-99b2-4351-badf-bc6369aa7633" containerName="registry-server" Mar 08 01:10:00 crc kubenswrapper[4762]: I0308 01:10:00.162736 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548870-6zhvp" Mar 08 01:10:00 crc kubenswrapper[4762]: I0308 01:10:00.165242 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:10:00 crc kubenswrapper[4762]: I0308 01:10:00.165452 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:10:00 crc kubenswrapper[4762]: I0308 01:10:00.165566 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:10:00 crc kubenswrapper[4762]: I0308 01:10:00.172881 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548870-6zhvp"] Mar 08 01:10:00 crc kubenswrapper[4762]: I0308 01:10:00.295461 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44qh6\" (UniqueName: \"kubernetes.io/projected/f9b69cae-0a10-4319-a039-0332adac8b95-kube-api-access-44qh6\") pod \"auto-csr-approver-29548870-6zhvp\" (UID: \"f9b69cae-0a10-4319-a039-0332adac8b95\") " pod="openshift-infra/auto-csr-approver-29548870-6zhvp" Mar 08 01:10:00 crc kubenswrapper[4762]: I0308 01:10:00.398170 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44qh6\" (UniqueName: \"kubernetes.io/projected/f9b69cae-0a10-4319-a039-0332adac8b95-kube-api-access-44qh6\") pod \"auto-csr-approver-29548870-6zhvp\" (UID: \"f9b69cae-0a10-4319-a039-0332adac8b95\") " pod="openshift-infra/auto-csr-approver-29548870-6zhvp" Mar 08 01:10:00 crc kubenswrapper[4762]: I0308 01:10:00.420912 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44qh6\" (UniqueName: \"kubernetes.io/projected/f9b69cae-0a10-4319-a039-0332adac8b95-kube-api-access-44qh6\") pod \"auto-csr-approver-29548870-6zhvp\" (UID: \"f9b69cae-0a10-4319-a039-0332adac8b95\") " 
pod="openshift-infra/auto-csr-approver-29548870-6zhvp" Mar 08 01:10:00 crc kubenswrapper[4762]: I0308 01:10:00.495742 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548870-6zhvp" Mar 08 01:10:01 crc kubenswrapper[4762]: W0308 01:10:01.029981 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9b69cae_0a10_4319_a039_0332adac8b95.slice/crio-b36472603290e17fac0bc34460d17acd06617f0c187807efcca70b147ee37b0e WatchSource:0}: Error finding container b36472603290e17fac0bc34460d17acd06617f0c187807efcca70b147ee37b0e: Status 404 returned error can't find the container with id b36472603290e17fac0bc34460d17acd06617f0c187807efcca70b147ee37b0e Mar 08 01:10:01 crc kubenswrapper[4762]: I0308 01:10:01.055820 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548870-6zhvp"] Mar 08 01:10:01 crc kubenswrapper[4762]: I0308 01:10:01.809357 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548870-6zhvp" event={"ID":"f9b69cae-0a10-4319-a039-0332adac8b95","Type":"ContainerStarted","Data":"b36472603290e17fac0bc34460d17acd06617f0c187807efcca70b147ee37b0e"} Mar 08 01:10:02 crc kubenswrapper[4762]: I0308 01:10:02.822587 4762 generic.go:334] "Generic (PLEG): container finished" podID="f9b69cae-0a10-4319-a039-0332adac8b95" containerID="0d6633dd6ff33a751ea9966e9d56a58902f1948db5ec285f4d55fbd5e940d3e0" exitCode=0 Mar 08 01:10:02 crc kubenswrapper[4762]: I0308 01:10:02.822866 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548870-6zhvp" event={"ID":"f9b69cae-0a10-4319-a039-0332adac8b95","Type":"ContainerDied","Data":"0d6633dd6ff33a751ea9966e9d56a58902f1948db5ec285f4d55fbd5e940d3e0"} Mar 08 01:10:04 crc kubenswrapper[4762]: I0308 01:10:04.374567 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548870-6zhvp" Mar 08 01:10:04 crc kubenswrapper[4762]: I0308 01:10:04.556855 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44qh6\" (UniqueName: \"kubernetes.io/projected/f9b69cae-0a10-4319-a039-0332adac8b95-kube-api-access-44qh6\") pod \"f9b69cae-0a10-4319-a039-0332adac8b95\" (UID: \"f9b69cae-0a10-4319-a039-0332adac8b95\") " Mar 08 01:10:04 crc kubenswrapper[4762]: I0308 01:10:04.566859 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9b69cae-0a10-4319-a039-0332adac8b95-kube-api-access-44qh6" (OuterVolumeSpecName: "kube-api-access-44qh6") pod "f9b69cae-0a10-4319-a039-0332adac8b95" (UID: "f9b69cae-0a10-4319-a039-0332adac8b95"). InnerVolumeSpecName "kube-api-access-44qh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:10:04 crc kubenswrapper[4762]: I0308 01:10:04.663483 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44qh6\" (UniqueName: \"kubernetes.io/projected/f9b69cae-0a10-4319-a039-0332adac8b95-kube-api-access-44qh6\") on node \"crc\" DevicePath \"\"" Mar 08 01:10:04 crc kubenswrapper[4762]: I0308 01:10:04.850510 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548870-6zhvp" event={"ID":"f9b69cae-0a10-4319-a039-0332adac8b95","Type":"ContainerDied","Data":"b36472603290e17fac0bc34460d17acd06617f0c187807efcca70b147ee37b0e"} Mar 08 01:10:04 crc kubenswrapper[4762]: I0308 01:10:04.850554 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b36472603290e17fac0bc34460d17acd06617f0c187807efcca70b147ee37b0e" Mar 08 01:10:04 crc kubenswrapper[4762]: I0308 01:10:04.850590 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548870-6zhvp" Mar 08 01:10:05 crc kubenswrapper[4762]: I0308 01:10:05.476510 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548864-rtrcn"] Mar 08 01:10:05 crc kubenswrapper[4762]: I0308 01:10:05.490837 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548864-rtrcn"] Mar 08 01:10:07 crc kubenswrapper[4762]: I0308 01:10:07.289740 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12bd746d-51ab-49e8-937b-8df6b580b687" path="/var/lib/kubelet/pods/12bd746d-51ab-49e8-937b-8df6b580b687/volumes" Mar 08 01:10:29 crc kubenswrapper[4762]: I0308 01:10:29.577100 4762 scope.go:117] "RemoveContainer" containerID="cfb0f5881196a1e4abd21a2e0a78db740f6235dae0c3e27c50deb9f382dfe92f" Mar 08 01:10:40 crc kubenswrapper[4762]: I0308 01:10:40.225118 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v28pj"] Mar 08 01:10:40 crc kubenswrapper[4762]: E0308 01:10:40.226745 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9b69cae-0a10-4319-a039-0332adac8b95" containerName="oc" Mar 08 01:10:40 crc kubenswrapper[4762]: I0308 01:10:40.226808 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9b69cae-0a10-4319-a039-0332adac8b95" containerName="oc" Mar 08 01:10:40 crc kubenswrapper[4762]: I0308 01:10:40.227283 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9b69cae-0a10-4319-a039-0332adac8b95" containerName="oc" Mar 08 01:10:40 crc kubenswrapper[4762]: I0308 01:10:40.230257 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:40 crc kubenswrapper[4762]: I0308 01:10:40.255452 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v28pj"] Mar 08 01:10:40 crc kubenswrapper[4762]: I0308 01:10:40.326444 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e122cd8-0f82-467b-9132-5962aca0254d-utilities\") pod \"community-operators-v28pj\" (UID: \"3e122cd8-0f82-467b-9132-5962aca0254d\") " pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:40 crc kubenswrapper[4762]: I0308 01:10:40.326612 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e122cd8-0f82-467b-9132-5962aca0254d-catalog-content\") pod \"community-operators-v28pj\" (UID: \"3e122cd8-0f82-467b-9132-5962aca0254d\") " pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:40 crc kubenswrapper[4762]: I0308 01:10:40.326657 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76qch\" (UniqueName: \"kubernetes.io/projected/3e122cd8-0f82-467b-9132-5962aca0254d-kube-api-access-76qch\") pod \"community-operators-v28pj\" (UID: \"3e122cd8-0f82-467b-9132-5962aca0254d\") " pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:40 crc kubenswrapper[4762]: I0308 01:10:40.429001 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e122cd8-0f82-467b-9132-5962aca0254d-catalog-content\") pod \"community-operators-v28pj\" (UID: \"3e122cd8-0f82-467b-9132-5962aca0254d\") " pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:40 crc kubenswrapper[4762]: I0308 01:10:40.429117 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-76qch\" (UniqueName: \"kubernetes.io/projected/3e122cd8-0f82-467b-9132-5962aca0254d-kube-api-access-76qch\") pod \"community-operators-v28pj\" (UID: \"3e122cd8-0f82-467b-9132-5962aca0254d\") " pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:40 crc kubenswrapper[4762]: I0308 01:10:40.429197 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e122cd8-0f82-467b-9132-5962aca0254d-utilities\") pod \"community-operators-v28pj\" (UID: \"3e122cd8-0f82-467b-9132-5962aca0254d\") " pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:40 crc kubenswrapper[4762]: I0308 01:10:40.429818 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e122cd8-0f82-467b-9132-5962aca0254d-utilities\") pod \"community-operators-v28pj\" (UID: \"3e122cd8-0f82-467b-9132-5962aca0254d\") " pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:40 crc kubenswrapper[4762]: I0308 01:10:40.429868 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e122cd8-0f82-467b-9132-5962aca0254d-catalog-content\") pod \"community-operators-v28pj\" (UID: \"3e122cd8-0f82-467b-9132-5962aca0254d\") " pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:40 crc kubenswrapper[4762]: I0308 01:10:40.453826 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76qch\" (UniqueName: \"kubernetes.io/projected/3e122cd8-0f82-467b-9132-5962aca0254d-kube-api-access-76qch\") pod \"community-operators-v28pj\" (UID: \"3e122cd8-0f82-467b-9132-5962aca0254d\") " pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:40 crc kubenswrapper[4762]: I0308 01:10:40.582102 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:41 crc kubenswrapper[4762]: I0308 01:10:41.117550 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v28pj"] Mar 08 01:10:41 crc kubenswrapper[4762]: W0308 01:10:41.119955 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e122cd8_0f82_467b_9132_5962aca0254d.slice/crio-1f31b233dabf94065457e4693b5286d45156ce0f7f80a3fc20c97611937d1947 WatchSource:0}: Error finding container 1f31b233dabf94065457e4693b5286d45156ce0f7f80a3fc20c97611937d1947: Status 404 returned error can't find the container with id 1f31b233dabf94065457e4693b5286d45156ce0f7f80a3fc20c97611937d1947 Mar 08 01:10:41 crc kubenswrapper[4762]: I0308 01:10:41.326291 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v28pj" event={"ID":"3e122cd8-0f82-467b-9132-5962aca0254d","Type":"ContainerStarted","Data":"1f31b233dabf94065457e4693b5286d45156ce0f7f80a3fc20c97611937d1947"} Mar 08 01:10:42 crc kubenswrapper[4762]: I0308 01:10:42.342708 4762 generic.go:334] "Generic (PLEG): container finished" podID="3e122cd8-0f82-467b-9132-5962aca0254d" containerID="0dc4d945b68ff5fb0df6fab242f816f32223565f49823e57e88756ecc4956f73" exitCode=0 Mar 08 01:10:42 crc kubenswrapper[4762]: I0308 01:10:42.342984 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v28pj" event={"ID":"3e122cd8-0f82-467b-9132-5962aca0254d","Type":"ContainerDied","Data":"0dc4d945b68ff5fb0df6fab242f816f32223565f49823e57e88756ecc4956f73"} Mar 08 01:10:43 crc kubenswrapper[4762]: I0308 01:10:43.357266 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v28pj" 
event={"ID":"3e122cd8-0f82-467b-9132-5962aca0254d","Type":"ContainerStarted","Data":"4f080c56613562835600a686215eb9d86be04a44793efad40aba70f380f6f2a9"} Mar 08 01:10:45 crc kubenswrapper[4762]: I0308 01:10:45.388076 4762 generic.go:334] "Generic (PLEG): container finished" podID="3e122cd8-0f82-467b-9132-5962aca0254d" containerID="4f080c56613562835600a686215eb9d86be04a44793efad40aba70f380f6f2a9" exitCode=0 Mar 08 01:10:45 crc kubenswrapper[4762]: I0308 01:10:45.388186 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v28pj" event={"ID":"3e122cd8-0f82-467b-9132-5962aca0254d","Type":"ContainerDied","Data":"4f080c56613562835600a686215eb9d86be04a44793efad40aba70f380f6f2a9"} Mar 08 01:10:46 crc kubenswrapper[4762]: I0308 01:10:46.400609 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v28pj" event={"ID":"3e122cd8-0f82-467b-9132-5962aca0254d","Type":"ContainerStarted","Data":"08838c48dbbd249d87b7aa903a8a4ffd6f5d8271fb521a3b14182c0d8a6d28cd"} Mar 08 01:10:46 crc kubenswrapper[4762]: I0308 01:10:46.423914 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v28pj" podStartSLOduration=3.001615411 podStartE2EDuration="6.423895107s" podCreationTimestamp="2026-03-08 01:10:40 +0000 UTC" firstStartedPulling="2026-03-08 01:10:42.344876489 +0000 UTC m=+2863.819020853" lastFinishedPulling="2026-03-08 01:10:45.767156165 +0000 UTC m=+2867.241300549" observedRunningTime="2026-03-08 01:10:46.41731247 +0000 UTC m=+2867.891456814" watchObservedRunningTime="2026-03-08 01:10:46.423895107 +0000 UTC m=+2867.898039451" Mar 08 01:10:50 crc kubenswrapper[4762]: I0308 01:10:50.582837 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:50 crc kubenswrapper[4762]: I0308 01:10:50.583200 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:50 crc kubenswrapper[4762]: I0308 01:10:50.689613 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:51 crc kubenswrapper[4762]: I0308 01:10:51.539167 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:51 crc kubenswrapper[4762]: I0308 01:10:51.594398 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v28pj"] Mar 08 01:10:53 crc kubenswrapper[4762]: I0308 01:10:53.494510 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v28pj" podUID="3e122cd8-0f82-467b-9132-5962aca0254d" containerName="registry-server" containerID="cri-o://08838c48dbbd249d87b7aa903a8a4ffd6f5d8271fb521a3b14182c0d8a6d28cd" gracePeriod=2 Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.085519 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.204448 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e122cd8-0f82-467b-9132-5962aca0254d-utilities\") pod \"3e122cd8-0f82-467b-9132-5962aca0254d\" (UID: \"3e122cd8-0f82-467b-9132-5962aca0254d\") " Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.204695 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76qch\" (UniqueName: \"kubernetes.io/projected/3e122cd8-0f82-467b-9132-5962aca0254d-kube-api-access-76qch\") pod \"3e122cd8-0f82-467b-9132-5962aca0254d\" (UID: \"3e122cd8-0f82-467b-9132-5962aca0254d\") " Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.204932 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e122cd8-0f82-467b-9132-5962aca0254d-catalog-content\") pod \"3e122cd8-0f82-467b-9132-5962aca0254d\" (UID: \"3e122cd8-0f82-467b-9132-5962aca0254d\") " Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.205711 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e122cd8-0f82-467b-9132-5962aca0254d-utilities" (OuterVolumeSpecName: "utilities") pod "3e122cd8-0f82-467b-9132-5962aca0254d" (UID: "3e122cd8-0f82-467b-9132-5962aca0254d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.210972 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e122cd8-0f82-467b-9132-5962aca0254d-kube-api-access-76qch" (OuterVolumeSpecName: "kube-api-access-76qch") pod "3e122cd8-0f82-467b-9132-5962aca0254d" (UID: "3e122cd8-0f82-467b-9132-5962aca0254d"). InnerVolumeSpecName "kube-api-access-76qch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.276378 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e122cd8-0f82-467b-9132-5962aca0254d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e122cd8-0f82-467b-9132-5962aca0254d" (UID: "3e122cd8-0f82-467b-9132-5962aca0254d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.308039 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e122cd8-0f82-467b-9132-5962aca0254d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.308876 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e122cd8-0f82-467b-9132-5962aca0254d-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.308899 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76qch\" (UniqueName: \"kubernetes.io/projected/3e122cd8-0f82-467b-9132-5962aca0254d-kube-api-access-76qch\") on node \"crc\" DevicePath \"\"" Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.508505 4762 generic.go:334] "Generic (PLEG): container finished" podID="3e122cd8-0f82-467b-9132-5962aca0254d" containerID="08838c48dbbd249d87b7aa903a8a4ffd6f5d8271fb521a3b14182c0d8a6d28cd" exitCode=0 Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.508553 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v28pj" event={"ID":"3e122cd8-0f82-467b-9132-5962aca0254d","Type":"ContainerDied","Data":"08838c48dbbd249d87b7aa903a8a4ffd6f5d8271fb521a3b14182c0d8a6d28cd"} Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.508564 4762 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-v28pj" Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.508586 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v28pj" event={"ID":"3e122cd8-0f82-467b-9132-5962aca0254d","Type":"ContainerDied","Data":"1f31b233dabf94065457e4693b5286d45156ce0f7f80a3fc20c97611937d1947"} Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.508606 4762 scope.go:117] "RemoveContainer" containerID="08838c48dbbd249d87b7aa903a8a4ffd6f5d8271fb521a3b14182c0d8a6d28cd" Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.546042 4762 scope.go:117] "RemoveContainer" containerID="4f080c56613562835600a686215eb9d86be04a44793efad40aba70f380f6f2a9" Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.588967 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v28pj"] Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.602098 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v28pj"] Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.637165 4762 scope.go:117] "RemoveContainer" containerID="0dc4d945b68ff5fb0df6fab242f816f32223565f49823e57e88756ecc4956f73" Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.664199 4762 scope.go:117] "RemoveContainer" containerID="08838c48dbbd249d87b7aa903a8a4ffd6f5d8271fb521a3b14182c0d8a6d28cd" Mar 08 01:10:54 crc kubenswrapper[4762]: E0308 01:10:54.664726 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08838c48dbbd249d87b7aa903a8a4ffd6f5d8271fb521a3b14182c0d8a6d28cd\": container with ID starting with 08838c48dbbd249d87b7aa903a8a4ffd6f5d8271fb521a3b14182c0d8a6d28cd not found: ID does not exist" containerID="08838c48dbbd249d87b7aa903a8a4ffd6f5d8271fb521a3b14182c0d8a6d28cd" Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.664789 
4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08838c48dbbd249d87b7aa903a8a4ffd6f5d8271fb521a3b14182c0d8a6d28cd"} err="failed to get container status \"08838c48dbbd249d87b7aa903a8a4ffd6f5d8271fb521a3b14182c0d8a6d28cd\": rpc error: code = NotFound desc = could not find container \"08838c48dbbd249d87b7aa903a8a4ffd6f5d8271fb521a3b14182c0d8a6d28cd\": container with ID starting with 08838c48dbbd249d87b7aa903a8a4ffd6f5d8271fb521a3b14182c0d8a6d28cd not found: ID does not exist" Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.664823 4762 scope.go:117] "RemoveContainer" containerID="4f080c56613562835600a686215eb9d86be04a44793efad40aba70f380f6f2a9" Mar 08 01:10:54 crc kubenswrapper[4762]: E0308 01:10:54.665158 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f080c56613562835600a686215eb9d86be04a44793efad40aba70f380f6f2a9\": container with ID starting with 4f080c56613562835600a686215eb9d86be04a44793efad40aba70f380f6f2a9 not found: ID does not exist" containerID="4f080c56613562835600a686215eb9d86be04a44793efad40aba70f380f6f2a9" Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.665192 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f080c56613562835600a686215eb9d86be04a44793efad40aba70f380f6f2a9"} err="failed to get container status \"4f080c56613562835600a686215eb9d86be04a44793efad40aba70f380f6f2a9\": rpc error: code = NotFound desc = could not find container \"4f080c56613562835600a686215eb9d86be04a44793efad40aba70f380f6f2a9\": container with ID starting with 4f080c56613562835600a686215eb9d86be04a44793efad40aba70f380f6f2a9 not found: ID does not exist" Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.665216 4762 scope.go:117] "RemoveContainer" containerID="0dc4d945b68ff5fb0df6fab242f816f32223565f49823e57e88756ecc4956f73" Mar 08 01:10:54 crc kubenswrapper[4762]: E0308 
01:10:54.665648 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dc4d945b68ff5fb0df6fab242f816f32223565f49823e57e88756ecc4956f73\": container with ID starting with 0dc4d945b68ff5fb0df6fab242f816f32223565f49823e57e88756ecc4956f73 not found: ID does not exist" containerID="0dc4d945b68ff5fb0df6fab242f816f32223565f49823e57e88756ecc4956f73" Mar 08 01:10:54 crc kubenswrapper[4762]: I0308 01:10:54.665682 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc4d945b68ff5fb0df6fab242f816f32223565f49823e57e88756ecc4956f73"} err="failed to get container status \"0dc4d945b68ff5fb0df6fab242f816f32223565f49823e57e88756ecc4956f73\": rpc error: code = NotFound desc = could not find container \"0dc4d945b68ff5fb0df6fab242f816f32223565f49823e57e88756ecc4956f73\": container with ID starting with 0dc4d945b68ff5fb0df6fab242f816f32223565f49823e57e88756ecc4956f73 not found: ID does not exist" Mar 08 01:10:55 crc kubenswrapper[4762]: I0308 01:10:55.275735 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e122cd8-0f82-467b-9132-5962aca0254d" path="/var/lib/kubelet/pods/3e122cd8-0f82-467b-9132-5962aca0254d/volumes" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.070718 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vjl5r"] Mar 08 01:11:09 crc kubenswrapper[4762]: E0308 01:11:09.072117 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e122cd8-0f82-467b-9132-5962aca0254d" containerName="extract-content" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.072141 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e122cd8-0f82-467b-9132-5962aca0254d" containerName="extract-content" Mar 08 01:11:09 crc kubenswrapper[4762]: E0308 01:11:09.072181 4762 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3e122cd8-0f82-467b-9132-5962aca0254d" containerName="extract-utilities" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.072194 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e122cd8-0f82-467b-9132-5962aca0254d" containerName="extract-utilities" Mar 08 01:11:09 crc kubenswrapper[4762]: E0308 01:11:09.072244 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e122cd8-0f82-467b-9132-5962aca0254d" containerName="registry-server" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.072257 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e122cd8-0f82-467b-9132-5962aca0254d" containerName="registry-server" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.072637 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e122cd8-0f82-467b-9132-5962aca0254d" containerName="registry-server" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.075513 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.094205 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vjl5r"] Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.142185 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkfmc\" (UniqueName: \"kubernetes.io/projected/0705d86b-6ad9-4c40-8e62-e746c22e70de-kube-api-access-kkfmc\") pod \"certified-operators-vjl5r\" (UID: \"0705d86b-6ad9-4c40-8e62-e746c22e70de\") " pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.142553 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0705d86b-6ad9-4c40-8e62-e746c22e70de-utilities\") pod \"certified-operators-vjl5r\" (UID: 
\"0705d86b-6ad9-4c40-8e62-e746c22e70de\") " pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.142817 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0705d86b-6ad9-4c40-8e62-e746c22e70de-catalog-content\") pod \"certified-operators-vjl5r\" (UID: \"0705d86b-6ad9-4c40-8e62-e746c22e70de\") " pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.244639 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0705d86b-6ad9-4c40-8e62-e746c22e70de-catalog-content\") pod \"certified-operators-vjl5r\" (UID: \"0705d86b-6ad9-4c40-8e62-e746c22e70de\") " pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.244751 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkfmc\" (UniqueName: \"kubernetes.io/projected/0705d86b-6ad9-4c40-8e62-e746c22e70de-kube-api-access-kkfmc\") pod \"certified-operators-vjl5r\" (UID: \"0705d86b-6ad9-4c40-8e62-e746c22e70de\") " pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.244914 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0705d86b-6ad9-4c40-8e62-e746c22e70de-utilities\") pod \"certified-operators-vjl5r\" (UID: \"0705d86b-6ad9-4c40-8e62-e746c22e70de\") " pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.245317 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0705d86b-6ad9-4c40-8e62-e746c22e70de-catalog-content\") pod \"certified-operators-vjl5r\" (UID: 
\"0705d86b-6ad9-4c40-8e62-e746c22e70de\") " pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.245538 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0705d86b-6ad9-4c40-8e62-e746c22e70de-utilities\") pod \"certified-operators-vjl5r\" (UID: \"0705d86b-6ad9-4c40-8e62-e746c22e70de\") " pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.268715 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkfmc\" (UniqueName: \"kubernetes.io/projected/0705d86b-6ad9-4c40-8e62-e746c22e70de-kube-api-access-kkfmc\") pod \"certified-operators-vjl5r\" (UID: \"0705d86b-6ad9-4c40-8e62-e746c22e70de\") " pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.425516 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:09 crc kubenswrapper[4762]: I0308 01:11:09.983077 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vjl5r"] Mar 08 01:11:10 crc kubenswrapper[4762]: I0308 01:11:10.729914 4762 generic.go:334] "Generic (PLEG): container finished" podID="0705d86b-6ad9-4c40-8e62-e746c22e70de" containerID="374e209d57eb4c87dc5ffd9614839e5d2ffb0976588b69e2c203d2b88aadf762" exitCode=0 Mar 08 01:11:10 crc kubenswrapper[4762]: I0308 01:11:10.730009 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjl5r" event={"ID":"0705d86b-6ad9-4c40-8e62-e746c22e70de","Type":"ContainerDied","Data":"374e209d57eb4c87dc5ffd9614839e5d2ffb0976588b69e2c203d2b88aadf762"} Mar 08 01:11:10 crc kubenswrapper[4762]: I0308 01:11:10.730312 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjl5r" 
event={"ID":"0705d86b-6ad9-4c40-8e62-e746c22e70de","Type":"ContainerStarted","Data":"6d8cbfdb0806042c5a80bbe9b583a0438858b3c2c7e9132cd39959c568b3ce18"} Mar 08 01:11:10 crc kubenswrapper[4762]: I0308 01:11:10.732003 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 01:11:11 crc kubenswrapper[4762]: I0308 01:11:11.743707 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjl5r" event={"ID":"0705d86b-6ad9-4c40-8e62-e746c22e70de","Type":"ContainerStarted","Data":"3c8cdd4e2339e75b3e843ee153079ccfcce0f7934f5abc2908080d968a0e2e24"} Mar 08 01:11:13 crc kubenswrapper[4762]: I0308 01:11:13.768563 4762 generic.go:334] "Generic (PLEG): container finished" podID="0705d86b-6ad9-4c40-8e62-e746c22e70de" containerID="3c8cdd4e2339e75b3e843ee153079ccfcce0f7934f5abc2908080d968a0e2e24" exitCode=0 Mar 08 01:11:13 crc kubenswrapper[4762]: I0308 01:11:13.768662 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjl5r" event={"ID":"0705d86b-6ad9-4c40-8e62-e746c22e70de","Type":"ContainerDied","Data":"3c8cdd4e2339e75b3e843ee153079ccfcce0f7934f5abc2908080d968a0e2e24"} Mar 08 01:11:14 crc kubenswrapper[4762]: I0308 01:11:14.784154 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjl5r" event={"ID":"0705d86b-6ad9-4c40-8e62-e746c22e70de","Type":"ContainerStarted","Data":"f37e411483f77262ba844b0ff8abe10c1941c98e4108a56da0cb6d832f7974d8"} Mar 08 01:11:14 crc kubenswrapper[4762]: I0308 01:11:14.828914 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vjl5r" podStartSLOduration=2.417412626 podStartE2EDuration="5.828885074s" podCreationTimestamp="2026-03-08 01:11:09 +0000 UTC" firstStartedPulling="2026-03-08 01:11:10.731710288 +0000 UTC m=+2892.205854632" lastFinishedPulling="2026-03-08 01:11:14.143182736 +0000 UTC 
m=+2895.617327080" observedRunningTime="2026-03-08 01:11:14.812954636 +0000 UTC m=+2896.287098990" watchObservedRunningTime="2026-03-08 01:11:14.828885074 +0000 UTC m=+2896.303029458" Mar 08 01:11:19 crc kubenswrapper[4762]: I0308 01:11:19.426941 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:19 crc kubenswrapper[4762]: I0308 01:11:19.427803 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:19 crc kubenswrapper[4762]: I0308 01:11:19.503981 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:19 crc kubenswrapper[4762]: I0308 01:11:19.942480 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:19 crc kubenswrapper[4762]: I0308 01:11:19.993100 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vjl5r"] Mar 08 01:11:21 crc kubenswrapper[4762]: I0308 01:11:21.880926 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vjl5r" podUID="0705d86b-6ad9-4c40-8e62-e746c22e70de" containerName="registry-server" containerID="cri-o://f37e411483f77262ba844b0ff8abe10c1941c98e4108a56da0cb6d832f7974d8" gracePeriod=2 Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.448574 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.586881 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0705d86b-6ad9-4c40-8e62-e746c22e70de-utilities\") pod \"0705d86b-6ad9-4c40-8e62-e746c22e70de\" (UID: \"0705d86b-6ad9-4c40-8e62-e746c22e70de\") " Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.587077 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkfmc\" (UniqueName: \"kubernetes.io/projected/0705d86b-6ad9-4c40-8e62-e746c22e70de-kube-api-access-kkfmc\") pod \"0705d86b-6ad9-4c40-8e62-e746c22e70de\" (UID: \"0705d86b-6ad9-4c40-8e62-e746c22e70de\") " Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.587106 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0705d86b-6ad9-4c40-8e62-e746c22e70de-catalog-content\") pod \"0705d86b-6ad9-4c40-8e62-e746c22e70de\" (UID: \"0705d86b-6ad9-4c40-8e62-e746c22e70de\") " Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.588217 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0705d86b-6ad9-4c40-8e62-e746c22e70de-utilities" (OuterVolumeSpecName: "utilities") pod "0705d86b-6ad9-4c40-8e62-e746c22e70de" (UID: "0705d86b-6ad9-4c40-8e62-e746c22e70de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.595201 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0705d86b-6ad9-4c40-8e62-e746c22e70de-kube-api-access-kkfmc" (OuterVolumeSpecName: "kube-api-access-kkfmc") pod "0705d86b-6ad9-4c40-8e62-e746c22e70de" (UID: "0705d86b-6ad9-4c40-8e62-e746c22e70de"). InnerVolumeSpecName "kube-api-access-kkfmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.643357 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0705d86b-6ad9-4c40-8e62-e746c22e70de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0705d86b-6ad9-4c40-8e62-e746c22e70de" (UID: "0705d86b-6ad9-4c40-8e62-e746c22e70de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.689676 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0705d86b-6ad9-4c40-8e62-e746c22e70de-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.689719 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkfmc\" (UniqueName: \"kubernetes.io/projected/0705d86b-6ad9-4c40-8e62-e746c22e70de-kube-api-access-kkfmc\") on node \"crc\" DevicePath \"\"" Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.689732 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0705d86b-6ad9-4c40-8e62-e746c22e70de-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.894443 4762 generic.go:334] "Generic (PLEG): container finished" podID="0705d86b-6ad9-4c40-8e62-e746c22e70de" containerID="f37e411483f77262ba844b0ff8abe10c1941c98e4108a56da0cb6d832f7974d8" exitCode=0 Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.894505 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vjl5r" event={"ID":"0705d86b-6ad9-4c40-8e62-e746c22e70de","Type":"ContainerDied","Data":"f37e411483f77262ba844b0ff8abe10c1941c98e4108a56da0cb6d832f7974d8"} Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.894553 4762 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-vjl5r" event={"ID":"0705d86b-6ad9-4c40-8e62-e746c22e70de","Type":"ContainerDied","Data":"6d8cbfdb0806042c5a80bbe9b583a0438858b3c2c7e9132cd39959c568b3ce18"} Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.894582 4762 scope.go:117] "RemoveContainer" containerID="f37e411483f77262ba844b0ff8abe10c1941c98e4108a56da0cb6d832f7974d8" Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.894640 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vjl5r" Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.941984 4762 scope.go:117] "RemoveContainer" containerID="3c8cdd4e2339e75b3e843ee153079ccfcce0f7934f5abc2908080d968a0e2e24" Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.952494 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vjl5r"] Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.972622 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vjl5r"] Mar 08 01:11:22 crc kubenswrapper[4762]: I0308 01:11:22.974895 4762 scope.go:117] "RemoveContainer" containerID="374e209d57eb4c87dc5ffd9614839e5d2ffb0976588b69e2c203d2b88aadf762" Mar 08 01:11:23 crc kubenswrapper[4762]: I0308 01:11:23.041556 4762 scope.go:117] "RemoveContainer" containerID="f37e411483f77262ba844b0ff8abe10c1941c98e4108a56da0cb6d832f7974d8" Mar 08 01:11:23 crc kubenswrapper[4762]: E0308 01:11:23.042080 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f37e411483f77262ba844b0ff8abe10c1941c98e4108a56da0cb6d832f7974d8\": container with ID starting with f37e411483f77262ba844b0ff8abe10c1941c98e4108a56da0cb6d832f7974d8 not found: ID does not exist" containerID="f37e411483f77262ba844b0ff8abe10c1941c98e4108a56da0cb6d832f7974d8" Mar 08 01:11:23 crc kubenswrapper[4762]: I0308 
01:11:23.042130 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37e411483f77262ba844b0ff8abe10c1941c98e4108a56da0cb6d832f7974d8"} err="failed to get container status \"f37e411483f77262ba844b0ff8abe10c1941c98e4108a56da0cb6d832f7974d8\": rpc error: code = NotFound desc = could not find container \"f37e411483f77262ba844b0ff8abe10c1941c98e4108a56da0cb6d832f7974d8\": container with ID starting with f37e411483f77262ba844b0ff8abe10c1941c98e4108a56da0cb6d832f7974d8 not found: ID does not exist" Mar 08 01:11:23 crc kubenswrapper[4762]: I0308 01:11:23.042165 4762 scope.go:117] "RemoveContainer" containerID="3c8cdd4e2339e75b3e843ee153079ccfcce0f7934f5abc2908080d968a0e2e24" Mar 08 01:11:23 crc kubenswrapper[4762]: E0308 01:11:23.042629 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c8cdd4e2339e75b3e843ee153079ccfcce0f7934f5abc2908080d968a0e2e24\": container with ID starting with 3c8cdd4e2339e75b3e843ee153079ccfcce0f7934f5abc2908080d968a0e2e24 not found: ID does not exist" containerID="3c8cdd4e2339e75b3e843ee153079ccfcce0f7934f5abc2908080d968a0e2e24" Mar 08 01:11:23 crc kubenswrapper[4762]: I0308 01:11:23.042668 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c8cdd4e2339e75b3e843ee153079ccfcce0f7934f5abc2908080d968a0e2e24"} err="failed to get container status \"3c8cdd4e2339e75b3e843ee153079ccfcce0f7934f5abc2908080d968a0e2e24\": rpc error: code = NotFound desc = could not find container \"3c8cdd4e2339e75b3e843ee153079ccfcce0f7934f5abc2908080d968a0e2e24\": container with ID starting with 3c8cdd4e2339e75b3e843ee153079ccfcce0f7934f5abc2908080d968a0e2e24 not found: ID does not exist" Mar 08 01:11:23 crc kubenswrapper[4762]: I0308 01:11:23.042696 4762 scope.go:117] "RemoveContainer" containerID="374e209d57eb4c87dc5ffd9614839e5d2ffb0976588b69e2c203d2b88aadf762" Mar 08 01:11:23 crc 
kubenswrapper[4762]: E0308 01:11:23.043121 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"374e209d57eb4c87dc5ffd9614839e5d2ffb0976588b69e2c203d2b88aadf762\": container with ID starting with 374e209d57eb4c87dc5ffd9614839e5d2ffb0976588b69e2c203d2b88aadf762 not found: ID does not exist" containerID="374e209d57eb4c87dc5ffd9614839e5d2ffb0976588b69e2c203d2b88aadf762" Mar 08 01:11:23 crc kubenswrapper[4762]: I0308 01:11:23.043187 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"374e209d57eb4c87dc5ffd9614839e5d2ffb0976588b69e2c203d2b88aadf762"} err="failed to get container status \"374e209d57eb4c87dc5ffd9614839e5d2ffb0976588b69e2c203d2b88aadf762\": rpc error: code = NotFound desc = could not find container \"374e209d57eb4c87dc5ffd9614839e5d2ffb0976588b69e2c203d2b88aadf762\": container with ID starting with 374e209d57eb4c87dc5ffd9614839e5d2ffb0976588b69e2c203d2b88aadf762 not found: ID does not exist" Mar 08 01:11:23 crc kubenswrapper[4762]: I0308 01:11:23.276987 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0705d86b-6ad9-4c40-8e62-e746c22e70de" path="/var/lib/kubelet/pods/0705d86b-6ad9-4c40-8e62-e746c22e70de/volumes" Mar 08 01:12:00 crc kubenswrapper[4762]: I0308 01:12:00.159999 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548872-fbs97"] Mar 08 01:12:00 crc kubenswrapper[4762]: E0308 01:12:00.162195 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0705d86b-6ad9-4c40-8e62-e746c22e70de" containerName="extract-utilities" Mar 08 01:12:00 crc kubenswrapper[4762]: I0308 01:12:00.162680 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="0705d86b-6ad9-4c40-8e62-e746c22e70de" containerName="extract-utilities" Mar 08 01:12:00 crc kubenswrapper[4762]: E0308 01:12:00.162848 4762 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0705d86b-6ad9-4c40-8e62-e746c22e70de" containerName="extract-content" Mar 08 01:12:00 crc kubenswrapper[4762]: I0308 01:12:00.163391 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="0705d86b-6ad9-4c40-8e62-e746c22e70de" containerName="extract-content" Mar 08 01:12:00 crc kubenswrapper[4762]: E0308 01:12:00.163527 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0705d86b-6ad9-4c40-8e62-e746c22e70de" containerName="registry-server" Mar 08 01:12:00 crc kubenswrapper[4762]: I0308 01:12:00.163616 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="0705d86b-6ad9-4c40-8e62-e746c22e70de" containerName="registry-server" Mar 08 01:12:00 crc kubenswrapper[4762]: I0308 01:12:00.164080 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="0705d86b-6ad9-4c40-8e62-e746c22e70de" containerName="registry-server" Mar 08 01:12:00 crc kubenswrapper[4762]: I0308 01:12:00.165320 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548872-fbs97" Mar 08 01:12:00 crc kubenswrapper[4762]: I0308 01:12:00.168920 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:12:00 crc kubenswrapper[4762]: I0308 01:12:00.168933 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:12:00 crc kubenswrapper[4762]: I0308 01:12:00.169664 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:12:00 crc kubenswrapper[4762]: I0308 01:12:00.178112 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548872-fbs97"] Mar 08 01:12:00 crc kubenswrapper[4762]: I0308 01:12:00.220330 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87fq4\" (UniqueName: 
\"kubernetes.io/projected/b931a840-b0e8-4a49-8ba9-0c658c6fa13e-kube-api-access-87fq4\") pod \"auto-csr-approver-29548872-fbs97\" (UID: \"b931a840-b0e8-4a49-8ba9-0c658c6fa13e\") " pod="openshift-infra/auto-csr-approver-29548872-fbs97" Mar 08 01:12:00 crc kubenswrapper[4762]: I0308 01:12:00.322862 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87fq4\" (UniqueName: \"kubernetes.io/projected/b931a840-b0e8-4a49-8ba9-0c658c6fa13e-kube-api-access-87fq4\") pod \"auto-csr-approver-29548872-fbs97\" (UID: \"b931a840-b0e8-4a49-8ba9-0c658c6fa13e\") " pod="openshift-infra/auto-csr-approver-29548872-fbs97" Mar 08 01:12:00 crc kubenswrapper[4762]: I0308 01:12:00.347509 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87fq4\" (UniqueName: \"kubernetes.io/projected/b931a840-b0e8-4a49-8ba9-0c658c6fa13e-kube-api-access-87fq4\") pod \"auto-csr-approver-29548872-fbs97\" (UID: \"b931a840-b0e8-4a49-8ba9-0c658c6fa13e\") " pod="openshift-infra/auto-csr-approver-29548872-fbs97" Mar 08 01:12:00 crc kubenswrapper[4762]: I0308 01:12:00.494178 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548872-fbs97" Mar 08 01:12:01 crc kubenswrapper[4762]: I0308 01:12:01.053770 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548872-fbs97"] Mar 08 01:12:01 crc kubenswrapper[4762]: W0308 01:12:01.054097 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb931a840_b0e8_4a49_8ba9_0c658c6fa13e.slice/crio-3b34933a557d824149de1437cb53bc3813201a5ae27b0ec75d1dfa79a6ec1f8d WatchSource:0}: Error finding container 3b34933a557d824149de1437cb53bc3813201a5ae27b0ec75d1dfa79a6ec1f8d: Status 404 returned error can't find the container with id 3b34933a557d824149de1437cb53bc3813201a5ae27b0ec75d1dfa79a6ec1f8d Mar 08 01:12:01 crc kubenswrapper[4762]: I0308 01:12:01.411102 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548872-fbs97" event={"ID":"b931a840-b0e8-4a49-8ba9-0c658c6fa13e","Type":"ContainerStarted","Data":"3b34933a557d824149de1437cb53bc3813201a5ae27b0ec75d1dfa79a6ec1f8d"} Mar 08 01:12:02 crc kubenswrapper[4762]: I0308 01:12:02.423488 4762 generic.go:334] "Generic (PLEG): container finished" podID="b931a840-b0e8-4a49-8ba9-0c658c6fa13e" containerID="6649ed867da85a6a3139a03678e151fa00641067212d82c1b47624469309211b" exitCode=0 Mar 08 01:12:02 crc kubenswrapper[4762]: I0308 01:12:02.423571 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548872-fbs97" event={"ID":"b931a840-b0e8-4a49-8ba9-0c658c6fa13e","Type":"ContainerDied","Data":"6649ed867da85a6a3139a03678e151fa00641067212d82c1b47624469309211b"} Mar 08 01:12:03 crc kubenswrapper[4762]: I0308 01:12:03.828584 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548872-fbs97" Mar 08 01:12:03 crc kubenswrapper[4762]: I0308 01:12:03.899751 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87fq4\" (UniqueName: \"kubernetes.io/projected/b931a840-b0e8-4a49-8ba9-0c658c6fa13e-kube-api-access-87fq4\") pod \"b931a840-b0e8-4a49-8ba9-0c658c6fa13e\" (UID: \"b931a840-b0e8-4a49-8ba9-0c658c6fa13e\") " Mar 08 01:12:03 crc kubenswrapper[4762]: I0308 01:12:03.908250 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b931a840-b0e8-4a49-8ba9-0c658c6fa13e-kube-api-access-87fq4" (OuterVolumeSpecName: "kube-api-access-87fq4") pod "b931a840-b0e8-4a49-8ba9-0c658c6fa13e" (UID: "b931a840-b0e8-4a49-8ba9-0c658c6fa13e"). InnerVolumeSpecName "kube-api-access-87fq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:12:04 crc kubenswrapper[4762]: I0308 01:12:04.006437 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87fq4\" (UniqueName: \"kubernetes.io/projected/b931a840-b0e8-4a49-8ba9-0c658c6fa13e-kube-api-access-87fq4\") on node \"crc\" DevicePath \"\"" Mar 08 01:12:04 crc kubenswrapper[4762]: I0308 01:12:04.453722 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548872-fbs97" event={"ID":"b931a840-b0e8-4a49-8ba9-0c658c6fa13e","Type":"ContainerDied","Data":"3b34933a557d824149de1437cb53bc3813201a5ae27b0ec75d1dfa79a6ec1f8d"} Mar 08 01:12:04 crc kubenswrapper[4762]: I0308 01:12:04.453830 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548872-fbs97" Mar 08 01:12:04 crc kubenswrapper[4762]: I0308 01:12:04.453853 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b34933a557d824149de1437cb53bc3813201a5ae27b0ec75d1dfa79a6ec1f8d" Mar 08 01:12:04 crc kubenswrapper[4762]: I0308 01:12:04.937789 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548866-qzcd8"] Mar 08 01:12:04 crc kubenswrapper[4762]: I0308 01:12:04.958843 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548866-qzcd8"] Mar 08 01:12:05 crc kubenswrapper[4762]: I0308 01:12:05.278923 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5bcb215-1281-464a-aa1f-28099e754a1f" path="/var/lib/kubelet/pods/b5bcb215-1281-464a-aa1f-28099e754a1f/volumes" Mar 08 01:12:12 crc kubenswrapper[4762]: I0308 01:12:12.851703 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:12:12 crc kubenswrapper[4762]: I0308 01:12:12.852576 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:12:17 crc kubenswrapper[4762]: I0308 01:12:17.309027 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8nzf9"] Mar 08 01:12:17 crc kubenswrapper[4762]: E0308 01:12:17.310999 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b931a840-b0e8-4a49-8ba9-0c658c6fa13e" 
containerName="oc" Mar 08 01:12:17 crc kubenswrapper[4762]: I0308 01:12:17.311032 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="b931a840-b0e8-4a49-8ba9-0c658c6fa13e" containerName="oc" Mar 08 01:12:17 crc kubenswrapper[4762]: I0308 01:12:17.311657 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="b931a840-b0e8-4a49-8ba9-0c658c6fa13e" containerName="oc" Mar 08 01:12:17 crc kubenswrapper[4762]: I0308 01:12:17.315429 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 08 01:12:17 crc kubenswrapper[4762]: I0308 01:12:17.336035 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nzf9"] Mar 08 01:12:17 crc kubenswrapper[4762]: I0308 01:12:17.376556 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d33d29fc-b654-4de3-a488-d45e12867575-utilities\") pod \"redhat-marketplace-8nzf9\" (UID: \"d33d29fc-b654-4de3-a488-d45e12867575\") " pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 08 01:12:17 crc kubenswrapper[4762]: I0308 01:12:17.376847 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl6jw\" (UniqueName: \"kubernetes.io/projected/d33d29fc-b654-4de3-a488-d45e12867575-kube-api-access-hl6jw\") pod \"redhat-marketplace-8nzf9\" (UID: \"d33d29fc-b654-4de3-a488-d45e12867575\") " pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 08 01:12:17 crc kubenswrapper[4762]: I0308 01:12:17.376950 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d33d29fc-b654-4de3-a488-d45e12867575-catalog-content\") pod \"redhat-marketplace-8nzf9\" (UID: \"d33d29fc-b654-4de3-a488-d45e12867575\") " pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 
08 01:12:17 crc kubenswrapper[4762]: I0308 01:12:17.479009 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d33d29fc-b654-4de3-a488-d45e12867575-utilities\") pod \"redhat-marketplace-8nzf9\" (UID: \"d33d29fc-b654-4de3-a488-d45e12867575\") " pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 08 01:12:17 crc kubenswrapper[4762]: I0308 01:12:17.479514 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hl6jw\" (UniqueName: \"kubernetes.io/projected/d33d29fc-b654-4de3-a488-d45e12867575-kube-api-access-hl6jw\") pod \"redhat-marketplace-8nzf9\" (UID: \"d33d29fc-b654-4de3-a488-d45e12867575\") " pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 08 01:12:17 crc kubenswrapper[4762]: I0308 01:12:17.479743 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d33d29fc-b654-4de3-a488-d45e12867575-catalog-content\") pod \"redhat-marketplace-8nzf9\" (UID: \"d33d29fc-b654-4de3-a488-d45e12867575\") " pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 08 01:12:17 crc kubenswrapper[4762]: I0308 01:12:17.480384 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d33d29fc-b654-4de3-a488-d45e12867575-catalog-content\") pod \"redhat-marketplace-8nzf9\" (UID: \"d33d29fc-b654-4de3-a488-d45e12867575\") " pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 08 01:12:17 crc kubenswrapper[4762]: I0308 01:12:17.479979 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d33d29fc-b654-4de3-a488-d45e12867575-utilities\") pod \"redhat-marketplace-8nzf9\" (UID: \"d33d29fc-b654-4de3-a488-d45e12867575\") " pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 08 01:12:17 crc kubenswrapper[4762]: I0308 
01:12:17.511683 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hl6jw\" (UniqueName: \"kubernetes.io/projected/d33d29fc-b654-4de3-a488-d45e12867575-kube-api-access-hl6jw\") pod \"redhat-marketplace-8nzf9\" (UID: \"d33d29fc-b654-4de3-a488-d45e12867575\") " pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 08 01:12:17 crc kubenswrapper[4762]: I0308 01:12:17.666156 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 08 01:12:18 crc kubenswrapper[4762]: I0308 01:12:18.136544 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nzf9"] Mar 08 01:12:18 crc kubenswrapper[4762]: I0308 01:12:18.649172 4762 generic.go:334] "Generic (PLEG): container finished" podID="d33d29fc-b654-4de3-a488-d45e12867575" containerID="24f2fdec450a7717d3719a45746c709e8c8e1e2843c019e1bc8937caac57e8a6" exitCode=0 Mar 08 01:12:18 crc kubenswrapper[4762]: I0308 01:12:18.649260 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nzf9" event={"ID":"d33d29fc-b654-4de3-a488-d45e12867575","Type":"ContainerDied","Data":"24f2fdec450a7717d3719a45746c709e8c8e1e2843c019e1bc8937caac57e8a6"} Mar 08 01:12:18 crc kubenswrapper[4762]: I0308 01:12:18.649549 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nzf9" event={"ID":"d33d29fc-b654-4de3-a488-d45e12867575","Type":"ContainerStarted","Data":"4a16b395fc63a0b3f1333d0deaf61091e6c28bd2f67aee22c4c47d61cfcbd206"} Mar 08 01:12:19 crc kubenswrapper[4762]: I0308 01:12:19.665623 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nzf9" event={"ID":"d33d29fc-b654-4de3-a488-d45e12867575","Type":"ContainerStarted","Data":"07bc630e3e2b8382befe4be18e9f9fa4b34c2434713fd3a28924a6ae17a80fea"} Mar 08 01:12:20 crc kubenswrapper[4762]: I0308 
01:12:20.685314 4762 generic.go:334] "Generic (PLEG): container finished" podID="d33d29fc-b654-4de3-a488-d45e12867575" containerID="07bc630e3e2b8382befe4be18e9f9fa4b34c2434713fd3a28924a6ae17a80fea" exitCode=0 Mar 08 01:12:20 crc kubenswrapper[4762]: I0308 01:12:20.685416 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nzf9" event={"ID":"d33d29fc-b654-4de3-a488-d45e12867575","Type":"ContainerDied","Data":"07bc630e3e2b8382befe4be18e9f9fa4b34c2434713fd3a28924a6ae17a80fea"} Mar 08 01:12:21 crc kubenswrapper[4762]: I0308 01:12:21.700518 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nzf9" event={"ID":"d33d29fc-b654-4de3-a488-d45e12867575","Type":"ContainerStarted","Data":"540089dd5035910601aad3d29cec0f02786025cdedf0155687550407e8bb69cf"} Mar 08 01:12:21 crc kubenswrapper[4762]: I0308 01:12:21.757034 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8nzf9" podStartSLOduration=2.122421137 podStartE2EDuration="4.757002892s" podCreationTimestamp="2026-03-08 01:12:17 +0000 UTC" firstStartedPulling="2026-03-08 01:12:18.65109226 +0000 UTC m=+2960.125236604" lastFinishedPulling="2026-03-08 01:12:21.285674005 +0000 UTC m=+2962.759818359" observedRunningTime="2026-03-08 01:12:21.72790883 +0000 UTC m=+2963.202053234" watchObservedRunningTime="2026-03-08 01:12:21.757002892 +0000 UTC m=+2963.231147276" Mar 08 01:12:27 crc kubenswrapper[4762]: I0308 01:12:27.666979 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 08 01:12:27 crc kubenswrapper[4762]: I0308 01:12:27.667691 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 08 01:12:27 crc kubenswrapper[4762]: I0308 01:12:27.720855 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 08 01:12:27 crc kubenswrapper[4762]: I0308 01:12:27.873007 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 08 01:12:27 crc kubenswrapper[4762]: I0308 01:12:27.965469 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nzf9"] Mar 08 01:12:29 crc kubenswrapper[4762]: I0308 01:12:29.757016 4762 scope.go:117] "RemoveContainer" containerID="58e44cf0459194542f7f94d28c1dae6895e99f2b71c8a82f73676bb174224f12" Mar 08 01:12:29 crc kubenswrapper[4762]: I0308 01:12:29.822990 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8nzf9" podUID="d33d29fc-b654-4de3-a488-d45e12867575" containerName="registry-server" containerID="cri-o://540089dd5035910601aad3d29cec0f02786025cdedf0155687550407e8bb69cf" gracePeriod=2 Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.372970 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.495493 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d33d29fc-b654-4de3-a488-d45e12867575-utilities\") pod \"d33d29fc-b654-4de3-a488-d45e12867575\" (UID: \"d33d29fc-b654-4de3-a488-d45e12867575\") " Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.495623 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d33d29fc-b654-4de3-a488-d45e12867575-catalog-content\") pod \"d33d29fc-b654-4de3-a488-d45e12867575\" (UID: \"d33d29fc-b654-4de3-a488-d45e12867575\") " Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.496430 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hl6jw\" (UniqueName: \"kubernetes.io/projected/d33d29fc-b654-4de3-a488-d45e12867575-kube-api-access-hl6jw\") pod \"d33d29fc-b654-4de3-a488-d45e12867575\" (UID: \"d33d29fc-b654-4de3-a488-d45e12867575\") " Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.496553 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d33d29fc-b654-4de3-a488-d45e12867575-utilities" (OuterVolumeSpecName: "utilities") pod "d33d29fc-b654-4de3-a488-d45e12867575" (UID: "d33d29fc-b654-4de3-a488-d45e12867575"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.497115 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d33d29fc-b654-4de3-a488-d45e12867575-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.506555 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33d29fc-b654-4de3-a488-d45e12867575-kube-api-access-hl6jw" (OuterVolumeSpecName: "kube-api-access-hl6jw") pod "d33d29fc-b654-4de3-a488-d45e12867575" (UID: "d33d29fc-b654-4de3-a488-d45e12867575"). InnerVolumeSpecName "kube-api-access-hl6jw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.523411 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d33d29fc-b654-4de3-a488-d45e12867575-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d33d29fc-b654-4de3-a488-d45e12867575" (UID: "d33d29fc-b654-4de3-a488-d45e12867575"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.600188 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d33d29fc-b654-4de3-a488-d45e12867575-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.600259 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hl6jw\" (UniqueName: \"kubernetes.io/projected/d33d29fc-b654-4de3-a488-d45e12867575-kube-api-access-hl6jw\") on node \"crc\" DevicePath \"\"" Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.841435 4762 generic.go:334] "Generic (PLEG): container finished" podID="d33d29fc-b654-4de3-a488-d45e12867575" containerID="540089dd5035910601aad3d29cec0f02786025cdedf0155687550407e8bb69cf" exitCode=0 Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.841476 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nzf9" event={"ID":"d33d29fc-b654-4de3-a488-d45e12867575","Type":"ContainerDied","Data":"540089dd5035910601aad3d29cec0f02786025cdedf0155687550407e8bb69cf"} Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.841503 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8nzf9" Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.841524 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8nzf9" event={"ID":"d33d29fc-b654-4de3-a488-d45e12867575","Type":"ContainerDied","Data":"4a16b395fc63a0b3f1333d0deaf61091e6c28bd2f67aee22c4c47d61cfcbd206"} Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.841567 4762 scope.go:117] "RemoveContainer" containerID="540089dd5035910601aad3d29cec0f02786025cdedf0155687550407e8bb69cf" Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.879736 4762 scope.go:117] "RemoveContainer" containerID="07bc630e3e2b8382befe4be18e9f9fa4b34c2434713fd3a28924a6ae17a80fea" Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.930526 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nzf9"] Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.930877 4762 scope.go:117] "RemoveContainer" containerID="24f2fdec450a7717d3719a45746c709e8c8e1e2843c019e1bc8937caac57e8a6" Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.952533 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8nzf9"] Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.960940 4762 scope.go:117] "RemoveContainer" containerID="540089dd5035910601aad3d29cec0f02786025cdedf0155687550407e8bb69cf" Mar 08 01:12:30 crc kubenswrapper[4762]: E0308 01:12:30.961386 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"540089dd5035910601aad3d29cec0f02786025cdedf0155687550407e8bb69cf\": container with ID starting with 540089dd5035910601aad3d29cec0f02786025cdedf0155687550407e8bb69cf not found: ID does not exist" containerID="540089dd5035910601aad3d29cec0f02786025cdedf0155687550407e8bb69cf" Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.961414 4762 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"540089dd5035910601aad3d29cec0f02786025cdedf0155687550407e8bb69cf"} err="failed to get container status \"540089dd5035910601aad3d29cec0f02786025cdedf0155687550407e8bb69cf\": rpc error: code = NotFound desc = could not find container \"540089dd5035910601aad3d29cec0f02786025cdedf0155687550407e8bb69cf\": container with ID starting with 540089dd5035910601aad3d29cec0f02786025cdedf0155687550407e8bb69cf not found: ID does not exist" Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.961437 4762 scope.go:117] "RemoveContainer" containerID="07bc630e3e2b8382befe4be18e9f9fa4b34c2434713fd3a28924a6ae17a80fea" Mar 08 01:12:30 crc kubenswrapper[4762]: E0308 01:12:30.961700 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07bc630e3e2b8382befe4be18e9f9fa4b34c2434713fd3a28924a6ae17a80fea\": container with ID starting with 07bc630e3e2b8382befe4be18e9f9fa4b34c2434713fd3a28924a6ae17a80fea not found: ID does not exist" containerID="07bc630e3e2b8382befe4be18e9f9fa4b34c2434713fd3a28924a6ae17a80fea" Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.961720 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07bc630e3e2b8382befe4be18e9f9fa4b34c2434713fd3a28924a6ae17a80fea"} err="failed to get container status \"07bc630e3e2b8382befe4be18e9f9fa4b34c2434713fd3a28924a6ae17a80fea\": rpc error: code = NotFound desc = could not find container \"07bc630e3e2b8382befe4be18e9f9fa4b34c2434713fd3a28924a6ae17a80fea\": container with ID starting with 07bc630e3e2b8382befe4be18e9f9fa4b34c2434713fd3a28924a6ae17a80fea not found: ID does not exist" Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.961731 4762 scope.go:117] "RemoveContainer" containerID="24f2fdec450a7717d3719a45746c709e8c8e1e2843c019e1bc8937caac57e8a6" Mar 08 01:12:30 crc kubenswrapper[4762]: E0308 
01:12:30.962118 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24f2fdec450a7717d3719a45746c709e8c8e1e2843c019e1bc8937caac57e8a6\": container with ID starting with 24f2fdec450a7717d3719a45746c709e8c8e1e2843c019e1bc8937caac57e8a6 not found: ID does not exist" containerID="24f2fdec450a7717d3719a45746c709e8c8e1e2843c019e1bc8937caac57e8a6" Mar 08 01:12:30 crc kubenswrapper[4762]: I0308 01:12:30.962137 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24f2fdec450a7717d3719a45746c709e8c8e1e2843c019e1bc8937caac57e8a6"} err="failed to get container status \"24f2fdec450a7717d3719a45746c709e8c8e1e2843c019e1bc8937caac57e8a6\": rpc error: code = NotFound desc = could not find container \"24f2fdec450a7717d3719a45746c709e8c8e1e2843c019e1bc8937caac57e8a6\": container with ID starting with 24f2fdec450a7717d3719a45746c709e8c8e1e2843c019e1bc8937caac57e8a6 not found: ID does not exist" Mar 08 01:12:31 crc kubenswrapper[4762]: I0308 01:12:31.292039 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d33d29fc-b654-4de3-a488-d45e12867575" path="/var/lib/kubelet/pods/d33d29fc-b654-4de3-a488-d45e12867575/volumes" Mar 08 01:12:42 crc kubenswrapper[4762]: I0308 01:12:42.852052 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:12:42 crc kubenswrapper[4762]: I0308 01:12:42.852743 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.555820 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.573536 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.592266 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.602513 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.612395 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.622326 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-pvpw4"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.630597 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jz2x6"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.639134 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.647878 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-6gdrn"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.656987 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.664943 4762 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.672428 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.679604 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gs2b4"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.687216 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7rzcn"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.694467 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6rgx8"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.704218 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.713642 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-lcg5f"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.723530 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.733561 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-9n4jk"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.741566 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.749545 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-hsj4j"] Mar 08 01:12:43 crc 
kubenswrapper[4762]: I0308 01:12:43.759791 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-27hkr"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.768060 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.777097 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.789026 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.807150 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-tmfjc"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.823514 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-6rgx8"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.834068 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qrxpk"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.843135 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bhcn8"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.853414 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-hbrxc"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.862330 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tgsql"] Mar 08 01:12:43 crc kubenswrapper[4762]: I0308 01:12:43.871053 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-84jzb"] Mar 08 01:12:45 crc kubenswrapper[4762]: I0308 01:12:45.278916 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="068a0247-3a6f-4505-9574-deba254e56f0" path="/var/lib/kubelet/pods/068a0247-3a6f-4505-9574-deba254e56f0/volumes" Mar 08 01:12:45 crc kubenswrapper[4762]: I0308 01:12:45.280274 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bb7150b-e4a5-435a-a306-e82bd036f781" path="/var/lib/kubelet/pods/0bb7150b-e4a5-435a-a306-e82bd036f781/volumes" Mar 08 01:12:45 crc kubenswrapper[4762]: I0308 01:12:45.281453 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27822805-9c82-4baf-b7ce-b0c00c0e335b" path="/var/lib/kubelet/pods/27822805-9c82-4baf-b7ce-b0c00c0e335b/volumes" Mar 08 01:12:45 crc kubenswrapper[4762]: I0308 01:12:45.282829 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4088c015-f583-40c0-be7c-2ee7305a0dcc" path="/var/lib/kubelet/pods/4088c015-f583-40c0-be7c-2ee7305a0dcc/volumes" Mar 08 01:12:45 crc kubenswrapper[4762]: I0308 01:12:45.285043 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="411b0d7b-2d67-4965-adc7-386c2a0a4e69" path="/var/lib/kubelet/pods/411b0d7b-2d67-4965-adc7-386c2a0a4e69/volumes" Mar 08 01:12:45 crc kubenswrapper[4762]: I0308 01:12:45.286159 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dca332a-7c7d-448c-b866-727fb88ea870" path="/var/lib/kubelet/pods/4dca332a-7c7d-448c-b866-727fb88ea870/volumes" Mar 08 01:12:45 crc kubenswrapper[4762]: I0308 01:12:45.287554 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="772ef692-515e-40a3-b0c7-f3f78e3620c1" path="/var/lib/kubelet/pods/772ef692-515e-40a3-b0c7-f3f78e3620c1/volumes" Mar 08 01:12:45 crc kubenswrapper[4762]: I0308 01:12:45.290824 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ac67effd-cb96-48f9-ac06-fa24004495ae" path="/var/lib/kubelet/pods/ac67effd-cb96-48f9-ac06-fa24004495ae/volumes" Mar 08 01:12:45 crc kubenswrapper[4762]: I0308 01:12:45.292260 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba04a642-51e5-447c-b31c-fa0b5de485f0" path="/var/lib/kubelet/pods/ba04a642-51e5-447c-b31c-fa0b5de485f0/volumes" Mar 08 01:12:45 crc kubenswrapper[4762]: I0308 01:12:45.293689 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f0b3fa-3113-4b3a-8dc1-bf91b0968853" path="/var/lib/kubelet/pods/c4f0b3fa-3113-4b3a-8dc1-bf91b0968853/volumes" Mar 08 01:12:45 crc kubenswrapper[4762]: I0308 01:12:45.296263 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c58b70cc-254f-4a6f-9acc-df7b1852f7d6" path="/var/lib/kubelet/pods/c58b70cc-254f-4a6f-9acc-df7b1852f7d6/volumes" Mar 08 01:12:45 crc kubenswrapper[4762]: I0308 01:12:45.296994 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45e1aae-1e35-43f5-95bb-b9bc4750eb9c" path="/var/lib/kubelet/pods/d45e1aae-1e35-43f5-95bb-b9bc4750eb9c/volumes" Mar 08 01:12:45 crc kubenswrapper[4762]: I0308 01:12:45.297605 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95" path="/var/lib/kubelet/pods/d66bfa70-7ce9-4bd3-9ca5-43f7cc9ccd95/volumes" Mar 08 01:12:45 crc kubenswrapper[4762]: I0308 01:12:45.298156 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5592be0-d479-4dff-8f2d-b86453bd2697" path="/var/lib/kubelet/pods/f5592be0-d479-4dff-8f2d-b86453bd2697/volumes" Mar 08 01:12:45 crc kubenswrapper[4762]: I0308 01:12:45.299092 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7494b8e-e16c-482d-8fc3-59736f59c318" path="/var/lib/kubelet/pods/f7494b8e-e16c-482d-8fc3-59736f59c318/volumes" Mar 08 01:12:45 crc kubenswrapper[4762]: I0308 01:12:45.299575 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f" path="/var/lib/kubelet/pods/fef23ddb-2fb9-4ae8-b7d0-5137b4b15e8f/volumes" Mar 08 01:12:47 crc kubenswrapper[4762]: I0308 01:12:47.959005 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf"] Mar 08 01:12:47 crc kubenswrapper[4762]: E0308 01:12:47.959894 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33d29fc-b654-4de3-a488-d45e12867575" containerName="extract-utilities" Mar 08 01:12:47 crc kubenswrapper[4762]: I0308 01:12:47.959912 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33d29fc-b654-4de3-a488-d45e12867575" containerName="extract-utilities" Mar 08 01:12:47 crc kubenswrapper[4762]: E0308 01:12:47.959929 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33d29fc-b654-4de3-a488-d45e12867575" containerName="registry-server" Mar 08 01:12:47 crc kubenswrapper[4762]: I0308 01:12:47.959937 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33d29fc-b654-4de3-a488-d45e12867575" containerName="registry-server" Mar 08 01:12:47 crc kubenswrapper[4762]: E0308 01:12:47.959947 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33d29fc-b654-4de3-a488-d45e12867575" containerName="extract-content" Mar 08 01:12:47 crc kubenswrapper[4762]: I0308 01:12:47.959954 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33d29fc-b654-4de3-a488-d45e12867575" containerName="extract-content" Mar 08 01:12:47 crc kubenswrapper[4762]: I0308 01:12:47.960196 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="d33d29fc-b654-4de3-a488-d45e12867575" containerName="registry-server" Mar 08 01:12:47 crc kubenswrapper[4762]: I0308 01:12:47.960989 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:47 crc kubenswrapper[4762]: I0308 01:12:47.963038 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:12:47 crc kubenswrapper[4762]: I0308 01:12:47.963045 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:12:47 crc kubenswrapper[4762]: I0308 01:12:47.963541 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:12:47 crc kubenswrapper[4762]: I0308 01:12:47.963630 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:12:47 crc kubenswrapper[4762]: I0308 01:12:47.973805 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:12:47 crc kubenswrapper[4762]: I0308 01:12:47.982634 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf"] Mar 08 01:12:48 crc kubenswrapper[4762]: I0308 01:12:48.025788 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:48 crc kubenswrapper[4762]: I0308 01:12:48.025857 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf\" (UID: 
\"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:48 crc kubenswrapper[4762]: I0308 01:12:48.025895 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qtns\" (UniqueName: \"kubernetes.io/projected/6833455d-3ceb-4dc6-9722-641f0b1dc40c-kube-api-access-4qtns\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:48 crc kubenswrapper[4762]: I0308 01:12:48.025980 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:48 crc kubenswrapper[4762]: I0308 01:12:48.026081 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:48 crc kubenswrapper[4762]: I0308 01:12:48.128322 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:48 crc kubenswrapper[4762]: 
I0308 01:12:48.128443 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:48 crc kubenswrapper[4762]: I0308 01:12:48.128484 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:48 crc kubenswrapper[4762]: I0308 01:12:48.128517 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qtns\" (UniqueName: \"kubernetes.io/projected/6833455d-3ceb-4dc6-9722-641f0b1dc40c-kube-api-access-4qtns\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:48 crc kubenswrapper[4762]: I0308 01:12:48.128601 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:48 crc kubenswrapper[4762]: I0308 01:12:48.134787 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf\" 
(UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:48 crc kubenswrapper[4762]: I0308 01:12:48.135277 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:48 crc kubenswrapper[4762]: I0308 01:12:48.136103 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:48 crc kubenswrapper[4762]: I0308 01:12:48.136839 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:48 crc kubenswrapper[4762]: I0308 01:12:48.154036 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qtns\" (UniqueName: \"kubernetes.io/projected/6833455d-3ceb-4dc6-9722-641f0b1dc40c-kube-api-access-4qtns\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:48 crc kubenswrapper[4762]: I0308 01:12:48.299295 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:12:48 crc kubenswrapper[4762]: I0308 01:12:48.894714 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf"] Mar 08 01:12:49 crc kubenswrapper[4762]: I0308 01:12:49.112145 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" event={"ID":"6833455d-3ceb-4dc6-9722-641f0b1dc40c","Type":"ContainerStarted","Data":"aafa9e1cb0f8af78366c2f7f445a77ac4a589056d29885ec0b527a4201d16f5c"} Mar 08 01:12:50 crc kubenswrapper[4762]: I0308 01:12:50.128932 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" event={"ID":"6833455d-3ceb-4dc6-9722-641f0b1dc40c","Type":"ContainerStarted","Data":"712b42178b3f150c844ab01eb0fec456e0ea623097a5512e20a53ee7360dd4cb"} Mar 08 01:12:50 crc kubenswrapper[4762]: I0308 01:12:50.158557 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" podStartSLOduration=2.450840023 podStartE2EDuration="3.15853897s" podCreationTimestamp="2026-03-08 01:12:47 +0000 UTC" firstStartedPulling="2026-03-08 01:12:48.896624261 +0000 UTC m=+2990.370768605" lastFinishedPulling="2026-03-08 01:12:49.604323168 +0000 UTC m=+2991.078467552" observedRunningTime="2026-03-08 01:12:50.15662297 +0000 UTC m=+2991.630767394" watchObservedRunningTime="2026-03-08 01:12:50.15853897 +0000 UTC m=+2991.632683314" Mar 08 01:13:02 crc kubenswrapper[4762]: I0308 01:13:02.323476 4762 generic.go:334] "Generic (PLEG): container finished" podID="6833455d-3ceb-4dc6-9722-641f0b1dc40c" containerID="712b42178b3f150c844ab01eb0fec456e0ea623097a5512e20a53ee7360dd4cb" exitCode=0 Mar 08 01:13:02 crc kubenswrapper[4762]: I0308 01:13:02.323557 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" event={"ID":"6833455d-3ceb-4dc6-9722-641f0b1dc40c","Type":"ContainerDied","Data":"712b42178b3f150c844ab01eb0fec456e0ea623097a5512e20a53ee7360dd4cb"} Mar 08 01:13:03 crc kubenswrapper[4762]: I0308 01:13:03.889489 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.039728 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qtns\" (UniqueName: \"kubernetes.io/projected/6833455d-3ceb-4dc6-9722-641f0b1dc40c-kube-api-access-4qtns\") pod \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.039847 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-inventory\") pod \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.040141 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-ceph\") pod \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.040190 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-ssh-key-openstack-edpm-ipam\") pod \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.040365 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-repo-setup-combined-ca-bundle\") pod \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\" (UID: \"6833455d-3ceb-4dc6-9722-641f0b1dc40c\") " Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.046601 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6833455d-3ceb-4dc6-9722-641f0b1dc40c-kube-api-access-4qtns" (OuterVolumeSpecName: "kube-api-access-4qtns") pod "6833455d-3ceb-4dc6-9722-641f0b1dc40c" (UID: "6833455d-3ceb-4dc6-9722-641f0b1dc40c"). InnerVolumeSpecName "kube-api-access-4qtns". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.059153 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6833455d-3ceb-4dc6-9722-641f0b1dc40c" (UID: "6833455d-3ceb-4dc6-9722-641f0b1dc40c"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.059306 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-ceph" (OuterVolumeSpecName: "ceph") pod "6833455d-3ceb-4dc6-9722-641f0b1dc40c" (UID: "6833455d-3ceb-4dc6-9722-641f0b1dc40c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.077976 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6833455d-3ceb-4dc6-9722-641f0b1dc40c" (UID: "6833455d-3ceb-4dc6-9722-641f0b1dc40c"). 
InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.084742 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-inventory" (OuterVolumeSpecName: "inventory") pod "6833455d-3ceb-4dc6-9722-641f0b1dc40c" (UID: "6833455d-3ceb-4dc6-9722-641f0b1dc40c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.144364 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qtns\" (UniqueName: \"kubernetes.io/projected/6833455d-3ceb-4dc6-9722-641f0b1dc40c-kube-api-access-4qtns\") on node \"crc\" DevicePath \"\"" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.144404 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.144423 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.144445 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.144464 4762 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6833455d-3ceb-4dc6-9722-641f0b1dc40c-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.352618 4762 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" event={"ID":"6833455d-3ceb-4dc6-9722-641f0b1dc40c","Type":"ContainerDied","Data":"aafa9e1cb0f8af78366c2f7f445a77ac4a589056d29885ec0b527a4201d16f5c"} Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.352680 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aafa9e1cb0f8af78366c2f7f445a77ac4a589056d29885ec0b527a4201d16f5c" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.352739 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.443875 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6"] Mar 08 01:13:04 crc kubenswrapper[4762]: E0308 01:13:04.444979 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6833455d-3ceb-4dc6-9722-641f0b1dc40c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.445011 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="6833455d-3ceb-4dc6-9722-641f0b1dc40c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.445382 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="6833455d-3ceb-4dc6-9722-641f0b1dc40c" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.446503 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.449490 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.450790 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.451529 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.454202 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.454447 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.460591 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6"] Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.553120 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.553222 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrt2q\" (UniqueName: \"kubernetes.io/projected/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-kube-api-access-wrt2q\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6\" (UID: 
\"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.553393 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.553436 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.553671 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.656297 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.656751 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.656882 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrt2q\" (UniqueName: \"kubernetes.io/projected/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-kube-api-access-wrt2q\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.656933 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.656996 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.665372 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-ssh-key-openstack-edpm-ipam\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.665396 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.665379 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.665910 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.672284 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrt2q\" (UniqueName: \"kubernetes.io/projected/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-kube-api-access-wrt2q\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:04 crc kubenswrapper[4762]: I0308 01:13:04.766150 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:13:05 crc kubenswrapper[4762]: I0308 01:13:05.347592 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6"] Mar 08 01:13:05 crc kubenswrapper[4762]: I0308 01:13:05.363258 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" event={"ID":"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a","Type":"ContainerStarted","Data":"0158a9b0edf75fe7ffaf9f9358b3b724adda4c443f38b1f891db21c5ced3d500"} Mar 08 01:13:06 crc kubenswrapper[4762]: I0308 01:13:06.385121 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" event={"ID":"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a","Type":"ContainerStarted","Data":"85bae656308f833998be52a63f088d8da642597b31502886ac52d00c25462dec"} Mar 08 01:13:06 crc kubenswrapper[4762]: I0308 01:13:06.429113 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" podStartSLOduration=2.006414543 podStartE2EDuration="2.429075996s" podCreationTimestamp="2026-03-08 01:13:04 +0000 UTC" firstStartedPulling="2026-03-08 01:13:05.35216303 +0000 UTC m=+3006.826307374" lastFinishedPulling="2026-03-08 01:13:05.774824443 +0000 UTC m=+3007.248968827" observedRunningTime="2026-03-08 01:13:06.405700014 +0000 UTC m=+3007.879844398" watchObservedRunningTime="2026-03-08 01:13:06.429075996 +0000 UTC m=+3007.903220380" Mar 08 01:13:12 crc kubenswrapper[4762]: I0308 01:13:12.851293 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:13:12 crc kubenswrapper[4762]: 
I0308 01:13:12.851945 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:13:12 crc kubenswrapper[4762]: I0308 01:13:12.851995 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 01:13:12 crc kubenswrapper[4762]: I0308 01:13:12.852800 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"811c2366d1dd066052dabcf66e6b8dd816dc127e5560d3cea5c3c417cbba5630"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 01:13:12 crc kubenswrapper[4762]: I0308 01:13:12.852846 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://811c2366d1dd066052dabcf66e6b8dd816dc127e5560d3cea5c3c417cbba5630" gracePeriod=600 Mar 08 01:13:13 crc kubenswrapper[4762]: I0308 01:13:13.466826 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="811c2366d1dd066052dabcf66e6b8dd816dc127e5560d3cea5c3c417cbba5630" exitCode=0 Mar 08 01:13:13 crc kubenswrapper[4762]: I0308 01:13:13.466891 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"811c2366d1dd066052dabcf66e6b8dd816dc127e5560d3cea5c3c417cbba5630"} Mar 08 01:13:13 crc 
kubenswrapper[4762]: I0308 01:13:13.467190 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499"} Mar 08 01:13:13 crc kubenswrapper[4762]: I0308 01:13:13.467221 4762 scope.go:117] "RemoveContainer" containerID="107d19fda0424ea7c5aebb8927bf739a718453f1585afbd02f93ecb4b160477a" Mar 08 01:13:29 crc kubenswrapper[4762]: I0308 01:13:29.881638 4762 scope.go:117] "RemoveContainer" containerID="9445e6731b8cc64cd2ee5717b3a60d317c440fadd974f0afb285575a41aa6852" Mar 08 01:13:29 crc kubenswrapper[4762]: I0308 01:13:29.954136 4762 scope.go:117] "RemoveContainer" containerID="fef10f330f5c8970c7c76876467aa0318eb2405db2cbcea6f9cf2e9df57839cb" Mar 08 01:13:30 crc kubenswrapper[4762]: I0308 01:13:30.009680 4762 scope.go:117] "RemoveContainer" containerID="50868e2a127a5f0f72b44a3e3c68c266f66285e9ded8b6455269ecae1896ad9d" Mar 08 01:13:30 crc kubenswrapper[4762]: I0308 01:13:30.061105 4762 scope.go:117] "RemoveContainer" containerID="dc4549bde0c76d0e63be61eb119d11e8c13a31ce950e6b9e5bfc2989e06fe218" Mar 08 01:13:30 crc kubenswrapper[4762]: I0308 01:13:30.115436 4762 scope.go:117] "RemoveContainer" containerID="c37d0497b5816748453bdb1644669a034cd8b90073d77323d65b566aa92c8ec1" Mar 08 01:13:30 crc kubenswrapper[4762]: I0308 01:13:30.210690 4762 scope.go:117] "RemoveContainer" containerID="5b0d377b56b84720b4914ca6ccfc7d4deb851450328e08083f2a31139a7fe615" Mar 08 01:13:30 crc kubenswrapper[4762]: I0308 01:13:30.248807 4762 scope.go:117] "RemoveContainer" containerID="7f579d220af391252c54aa64ff17a3692d7c5a59e5186291213fe81aa0af5c1c" Mar 08 01:13:30 crc kubenswrapper[4762]: I0308 01:13:30.283871 4762 scope.go:117] "RemoveContainer" containerID="e4c8cacf8e690119b0a31da33eaf213b8270968ee8ea58f0d2e10feb8e82dea6" Mar 08 01:13:30 crc kubenswrapper[4762]: I0308 01:13:30.324317 4762 
scope.go:117] "RemoveContainer" containerID="f0543878de1eca384d6d1d0937d3dd5df14764b02bdf3f2442e53684164312c3" Mar 08 01:13:30 crc kubenswrapper[4762]: I0308 01:13:30.374241 4762 scope.go:117] "RemoveContainer" containerID="81780de73bae7feda2ce1e462ac378b4934bb5d62ed0e2691e16306908e46cb2" Mar 08 01:13:30 crc kubenswrapper[4762]: I0308 01:13:30.482592 4762 scope.go:117] "RemoveContainer" containerID="5605e7c042eea99fd7e944ec47c882b2a0e8acee46eefa2adc5469b69bdea8db" Mar 08 01:13:30 crc kubenswrapper[4762]: I0308 01:13:30.527693 4762 scope.go:117] "RemoveContainer" containerID="9f20ee537496d518f89b00f6e185be15e6da16a3af71dfefefc8aef0a5315496" Mar 08 01:13:30 crc kubenswrapper[4762]: I0308 01:13:30.601895 4762 scope.go:117] "RemoveContainer" containerID="058ad8c33b3f2044d297935fdeb9363ebf7e50375dc481e60b982235ab041ac8" Mar 08 01:13:30 crc kubenswrapper[4762]: I0308 01:13:30.645774 4762 scope.go:117] "RemoveContainer" containerID="11a0161501ea83561acb466435b786fcc1549249d9a9203eef101a74e84b87aa" Mar 08 01:13:30 crc kubenswrapper[4762]: I0308 01:13:30.702381 4762 scope.go:117] "RemoveContainer" containerID="95d0c771f33c0d02f30b67f5a10c306de01d2673ecb0d4aeaa23155117a627b0" Mar 08 01:14:00 crc kubenswrapper[4762]: I0308 01:14:00.165080 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548874-5bw9s"] Mar 08 01:14:00 crc kubenswrapper[4762]: I0308 01:14:00.168297 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548874-5bw9s" Mar 08 01:14:00 crc kubenswrapper[4762]: I0308 01:14:00.171010 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:14:00 crc kubenswrapper[4762]: I0308 01:14:00.171653 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:14:00 crc kubenswrapper[4762]: I0308 01:14:00.171882 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:14:00 crc kubenswrapper[4762]: I0308 01:14:00.178071 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548874-5bw9s"] Mar 08 01:14:00 crc kubenswrapper[4762]: I0308 01:14:00.190710 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldwzv\" (UniqueName: \"kubernetes.io/projected/36d4d8ff-7a0f-4055-8647-47cbac3d4d6a-kube-api-access-ldwzv\") pod \"auto-csr-approver-29548874-5bw9s\" (UID: \"36d4d8ff-7a0f-4055-8647-47cbac3d4d6a\") " pod="openshift-infra/auto-csr-approver-29548874-5bw9s" Mar 08 01:14:00 crc kubenswrapper[4762]: I0308 01:14:00.293479 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldwzv\" (UniqueName: \"kubernetes.io/projected/36d4d8ff-7a0f-4055-8647-47cbac3d4d6a-kube-api-access-ldwzv\") pod \"auto-csr-approver-29548874-5bw9s\" (UID: \"36d4d8ff-7a0f-4055-8647-47cbac3d4d6a\") " pod="openshift-infra/auto-csr-approver-29548874-5bw9s" Mar 08 01:14:00 crc kubenswrapper[4762]: I0308 01:14:00.322205 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldwzv\" (UniqueName: \"kubernetes.io/projected/36d4d8ff-7a0f-4055-8647-47cbac3d4d6a-kube-api-access-ldwzv\") pod \"auto-csr-approver-29548874-5bw9s\" (UID: \"36d4d8ff-7a0f-4055-8647-47cbac3d4d6a\") " 
pod="openshift-infra/auto-csr-approver-29548874-5bw9s" Mar 08 01:14:00 crc kubenswrapper[4762]: I0308 01:14:00.510756 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548874-5bw9s" Mar 08 01:14:01 crc kubenswrapper[4762]: I0308 01:14:01.028467 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548874-5bw9s"] Mar 08 01:14:01 crc kubenswrapper[4762]: I0308 01:14:01.132050 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548874-5bw9s" event={"ID":"36d4d8ff-7a0f-4055-8647-47cbac3d4d6a","Type":"ContainerStarted","Data":"c92a6dc6f452da14540165b987ebc671cac3977a8c557f3b173b31bb9bd39299"} Mar 08 01:14:08 crc kubenswrapper[4762]: I0308 01:14:08.634239 4762 generic.go:334] "Generic (PLEG): container finished" podID="36d4d8ff-7a0f-4055-8647-47cbac3d4d6a" containerID="5fa92b998a5ea28268e54d25f38f6b6bb737cd74179018b7e6339df3b2674863" exitCode=0 Mar 08 01:14:08 crc kubenswrapper[4762]: I0308 01:14:08.634397 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548874-5bw9s" event={"ID":"36d4d8ff-7a0f-4055-8647-47cbac3d4d6a","Type":"ContainerDied","Data":"5fa92b998a5ea28268e54d25f38f6b6bb737cd74179018b7e6339df3b2674863"} Mar 08 01:14:10 crc kubenswrapper[4762]: I0308 01:14:10.109381 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548874-5bw9s" Mar 08 01:14:10 crc kubenswrapper[4762]: I0308 01:14:10.244389 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldwzv\" (UniqueName: \"kubernetes.io/projected/36d4d8ff-7a0f-4055-8647-47cbac3d4d6a-kube-api-access-ldwzv\") pod \"36d4d8ff-7a0f-4055-8647-47cbac3d4d6a\" (UID: \"36d4d8ff-7a0f-4055-8647-47cbac3d4d6a\") " Mar 08 01:14:10 crc kubenswrapper[4762]: I0308 01:14:10.254172 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d4d8ff-7a0f-4055-8647-47cbac3d4d6a-kube-api-access-ldwzv" (OuterVolumeSpecName: "kube-api-access-ldwzv") pod "36d4d8ff-7a0f-4055-8647-47cbac3d4d6a" (UID: "36d4d8ff-7a0f-4055-8647-47cbac3d4d6a"). InnerVolumeSpecName "kube-api-access-ldwzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:14:10 crc kubenswrapper[4762]: I0308 01:14:10.347584 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldwzv\" (UniqueName: \"kubernetes.io/projected/36d4d8ff-7a0f-4055-8647-47cbac3d4d6a-kube-api-access-ldwzv\") on node \"crc\" DevicePath \"\"" Mar 08 01:14:10 crc kubenswrapper[4762]: I0308 01:14:10.665389 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548874-5bw9s" event={"ID":"36d4d8ff-7a0f-4055-8647-47cbac3d4d6a","Type":"ContainerDied","Data":"c92a6dc6f452da14540165b987ebc671cac3977a8c557f3b173b31bb9bd39299"} Mar 08 01:14:10 crc kubenswrapper[4762]: I0308 01:14:10.665435 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c92a6dc6f452da14540165b987ebc671cac3977a8c557f3b173b31bb9bd39299" Mar 08 01:14:10 crc kubenswrapper[4762]: I0308 01:14:10.665492 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548874-5bw9s" Mar 08 01:14:11 crc kubenswrapper[4762]: I0308 01:14:11.209628 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548868-v99mj"] Mar 08 01:14:11 crc kubenswrapper[4762]: I0308 01:14:11.227709 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548868-v99mj"] Mar 08 01:14:11 crc kubenswrapper[4762]: I0308 01:14:11.285211 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae" path="/var/lib/kubelet/pods/f90e8bfe-926a-4a3a-bbfd-7e46ed7767ae/volumes" Mar 08 01:14:31 crc kubenswrapper[4762]: I0308 01:14:31.035674 4762 scope.go:117] "RemoveContainer" containerID="1f5864d92bea8637cb7459ddab3ae69977942fe80491f64fe462799e1677194d" Mar 08 01:14:31 crc kubenswrapper[4762]: I0308 01:14:31.076362 4762 scope.go:117] "RemoveContainer" containerID="9abb8ba4cbac2012bf57ab6390e8e0d1b0dd634b25780efb00a63fe4783ecc4d" Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.157633 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb"] Mar 08 01:15:00 crc kubenswrapper[4762]: E0308 01:15:00.159166 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d4d8ff-7a0f-4055-8647-47cbac3d4d6a" containerName="oc" Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.159192 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d4d8ff-7a0f-4055-8647-47cbac3d4d6a" containerName="oc" Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.159728 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d4d8ff-7a0f-4055-8647-47cbac3d4d6a" containerName="oc" Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.161453 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb" Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.164354 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.165114 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.169715 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb"] Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.283400 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87a93ec2-8a4a-4bb4-9f65-1265d565d052-config-volume\") pod \"collect-profiles-29548875-7blfb\" (UID: \"87a93ec2-8a4a-4bb4-9f65-1265d565d052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb" Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.283772 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87a93ec2-8a4a-4bb4-9f65-1265d565d052-secret-volume\") pod \"collect-profiles-29548875-7blfb\" (UID: \"87a93ec2-8a4a-4bb4-9f65-1265d565d052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb" Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.283836 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqr9r\" (UniqueName: \"kubernetes.io/projected/87a93ec2-8a4a-4bb4-9f65-1265d565d052-kube-api-access-gqr9r\") pod \"collect-profiles-29548875-7blfb\" (UID: \"87a93ec2-8a4a-4bb4-9f65-1265d565d052\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb" Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.385643 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87a93ec2-8a4a-4bb4-9f65-1265d565d052-config-volume\") pod \"collect-profiles-29548875-7blfb\" (UID: \"87a93ec2-8a4a-4bb4-9f65-1265d565d052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb" Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.385751 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87a93ec2-8a4a-4bb4-9f65-1265d565d052-secret-volume\") pod \"collect-profiles-29548875-7blfb\" (UID: \"87a93ec2-8a4a-4bb4-9f65-1265d565d052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb" Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.385845 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqr9r\" (UniqueName: \"kubernetes.io/projected/87a93ec2-8a4a-4bb4-9f65-1265d565d052-kube-api-access-gqr9r\") pod \"collect-profiles-29548875-7blfb\" (UID: \"87a93ec2-8a4a-4bb4-9f65-1265d565d052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb" Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.386578 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87a93ec2-8a4a-4bb4-9f65-1265d565d052-config-volume\") pod \"collect-profiles-29548875-7blfb\" (UID: \"87a93ec2-8a4a-4bb4-9f65-1265d565d052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb" Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.396613 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/87a93ec2-8a4a-4bb4-9f65-1265d565d052-secret-volume\") pod \"collect-profiles-29548875-7blfb\" (UID: \"87a93ec2-8a4a-4bb4-9f65-1265d565d052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb" Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.415296 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqr9r\" (UniqueName: \"kubernetes.io/projected/87a93ec2-8a4a-4bb4-9f65-1265d565d052-kube-api-access-gqr9r\") pod \"collect-profiles-29548875-7blfb\" (UID: \"87a93ec2-8a4a-4bb4-9f65-1265d565d052\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb" Mar 08 01:15:00 crc kubenswrapper[4762]: I0308 01:15:00.493961 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb" Mar 08 01:15:03 crc kubenswrapper[4762]: I0308 01:15:03.855537 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb"] Mar 08 01:15:04 crc kubenswrapper[4762]: I0308 01:15:04.597283 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb" event={"ID":"87a93ec2-8a4a-4bb4-9f65-1265d565d052","Type":"ContainerStarted","Data":"1330a302434097ea913c3789dba8d29b2229d846ee0470af806540508866f4d4"} Mar 08 01:15:05 crc kubenswrapper[4762]: I0308 01:15:05.609187 4762 generic.go:334] "Generic (PLEG): container finished" podID="87a93ec2-8a4a-4bb4-9f65-1265d565d052" containerID="923c7f782026dcf1f69cbd8aafac33085cc6b3afed33d78395234ad8cf778ffa" exitCode=0 Mar 08 01:15:05 crc kubenswrapper[4762]: I0308 01:15:05.609494 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb" 
event={"ID":"87a93ec2-8a4a-4bb4-9f65-1265d565d052","Type":"ContainerDied","Data":"923c7f782026dcf1f69cbd8aafac33085cc6b3afed33d78395234ad8cf778ffa"} Mar 08 01:15:07 crc kubenswrapper[4762]: I0308 01:15:07.063898 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb" Mar 08 01:15:07 crc kubenswrapper[4762]: I0308 01:15:07.145268 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqr9r\" (UniqueName: \"kubernetes.io/projected/87a93ec2-8a4a-4bb4-9f65-1265d565d052-kube-api-access-gqr9r\") pod \"87a93ec2-8a4a-4bb4-9f65-1265d565d052\" (UID: \"87a93ec2-8a4a-4bb4-9f65-1265d565d052\") " Mar 08 01:15:07 crc kubenswrapper[4762]: I0308 01:15:07.145316 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87a93ec2-8a4a-4bb4-9f65-1265d565d052-config-volume\") pod \"87a93ec2-8a4a-4bb4-9f65-1265d565d052\" (UID: \"87a93ec2-8a4a-4bb4-9f65-1265d565d052\") " Mar 08 01:15:07 crc kubenswrapper[4762]: I0308 01:15:07.146332 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87a93ec2-8a4a-4bb4-9f65-1265d565d052-config-volume" (OuterVolumeSpecName: "config-volume") pod "87a93ec2-8a4a-4bb4-9f65-1265d565d052" (UID: "87a93ec2-8a4a-4bb4-9f65-1265d565d052"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 01:15:07 crc kubenswrapper[4762]: I0308 01:15:07.151808 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87a93ec2-8a4a-4bb4-9f65-1265d565d052-kube-api-access-gqr9r" (OuterVolumeSpecName: "kube-api-access-gqr9r") pod "87a93ec2-8a4a-4bb4-9f65-1265d565d052" (UID: "87a93ec2-8a4a-4bb4-9f65-1265d565d052"). InnerVolumeSpecName "kube-api-access-gqr9r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:15:07 crc kubenswrapper[4762]: I0308 01:15:07.246465 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87a93ec2-8a4a-4bb4-9f65-1265d565d052-secret-volume\") pod \"87a93ec2-8a4a-4bb4-9f65-1265d565d052\" (UID: \"87a93ec2-8a4a-4bb4-9f65-1265d565d052\") " Mar 08 01:15:07 crc kubenswrapper[4762]: I0308 01:15:07.247182 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqr9r\" (UniqueName: \"kubernetes.io/projected/87a93ec2-8a4a-4bb4-9f65-1265d565d052-kube-api-access-gqr9r\") on node \"crc\" DevicePath \"\"" Mar 08 01:15:07 crc kubenswrapper[4762]: I0308 01:15:07.247270 4762 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87a93ec2-8a4a-4bb4-9f65-1265d565d052-config-volume\") on node \"crc\" DevicePath \"\"" Mar 08 01:15:07 crc kubenswrapper[4762]: I0308 01:15:07.249416 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87a93ec2-8a4a-4bb4-9f65-1265d565d052-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "87a93ec2-8a4a-4bb4-9f65-1265d565d052" (UID: "87a93ec2-8a4a-4bb4-9f65-1265d565d052"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:15:07 crc kubenswrapper[4762]: I0308 01:15:07.349396 4762 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/87a93ec2-8a4a-4bb4-9f65-1265d565d052-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 08 01:15:07 crc kubenswrapper[4762]: I0308 01:15:07.804585 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb" Mar 08 01:15:07 crc kubenswrapper[4762]: I0308 01:15:07.804493 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb" event={"ID":"87a93ec2-8a4a-4bb4-9f65-1265d565d052","Type":"ContainerDied","Data":"1330a302434097ea913c3789dba8d29b2229d846ee0470af806540508866f4d4"} Mar 08 01:15:07 crc kubenswrapper[4762]: I0308 01:15:07.805529 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1330a302434097ea913c3789dba8d29b2229d846ee0470af806540508866f4d4" Mar 08 01:15:08 crc kubenswrapper[4762]: I0308 01:15:08.159845 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx"] Mar 08 01:15:08 crc kubenswrapper[4762]: I0308 01:15:08.173094 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548830-8d7wx"] Mar 08 01:15:09 crc kubenswrapper[4762]: I0308 01:15:09.285502 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df7f7a0c-caca-4535-8a12-c6dbca79e550" path="/var/lib/kubelet/pods/df7f7a0c-caca-4535-8a12-c6dbca79e550/volumes" Mar 08 01:15:09 crc kubenswrapper[4762]: I0308 01:15:09.825959 4762 generic.go:334] "Generic (PLEG): container finished" podID="7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a" containerID="85bae656308f833998be52a63f088d8da642597b31502886ac52d00c25462dec" exitCode=0 Mar 08 01:15:09 crc kubenswrapper[4762]: I0308 01:15:09.826016 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" event={"ID":"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a","Type":"ContainerDied","Data":"85bae656308f833998be52a63f088d8da642597b31502886ac52d00c25462dec"} Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.379223 4762 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.569337 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-ceph\") pod \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.569463 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-inventory\") pod \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.569552 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-ssh-key-openstack-edpm-ipam\") pod \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.569798 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-bootstrap-combined-ca-bundle\") pod \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.569857 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrt2q\" (UniqueName: \"kubernetes.io/projected/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-kube-api-access-wrt2q\") pod \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\" (UID: \"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a\") " Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.575692 4762 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-ceph" (OuterVolumeSpecName: "ceph") pod "7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a" (UID: "7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.576431 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a" (UID: "7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.581197 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-kube-api-access-wrt2q" (OuterVolumeSpecName: "kube-api-access-wrt2q") pod "7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a" (UID: "7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a"). InnerVolumeSpecName "kube-api-access-wrt2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.630311 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a" (UID: "7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.630751 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-inventory" (OuterVolumeSpecName: "inventory") pod "7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a" (UID: "7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.673404 4762 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.674087 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrt2q\" (UniqueName: \"kubernetes.io/projected/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-kube-api-access-wrt2q\") on node \"crc\" DevicePath \"\"" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.674258 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.674384 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.674516 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.843968 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" event={"ID":"7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a","Type":"ContainerDied","Data":"0158a9b0edf75fe7ffaf9f9358b3b724adda4c443f38b1f891db21c5ced3d500"} Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.844279 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0158a9b0edf75fe7ffaf9f9358b3b724adda4c443f38b1f891db21c5ced3d500" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.844066 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.997852 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs"] Mar 08 01:15:11 crc kubenswrapper[4762]: E0308 01:15:11.998307 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.998324 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 08 01:15:11 crc kubenswrapper[4762]: E0308 01:15:11.998333 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87a93ec2-8a4a-4bb4-9f65-1265d565d052" containerName="collect-profiles" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.998340 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="87a93ec2-8a4a-4bb4-9f65-1265d565d052" containerName="collect-profiles" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.998572 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="87a93ec2-8a4a-4bb4-9f65-1265d565d052" containerName="collect-profiles" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.998604 4762 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Mar 08 01:15:11 crc kubenswrapper[4762]: I0308 01:15:11.999291 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.003686 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.003949 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.006743 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs"] Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.007326 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.007364 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.007442 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.183274 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs\" (UID: \"60632753-693e-4352-b416-3b64699b7e67\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.183980 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs\" (UID: \"60632753-693e-4352-b416-3b64699b7e67\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.184213 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs\" (UID: \"60632753-693e-4352-b416-3b64699b7e67\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.184332 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7dv8\" (UniqueName: \"kubernetes.io/projected/60632753-693e-4352-b416-3b64699b7e67-kube-api-access-p7dv8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs\" (UID: \"60632753-693e-4352-b416-3b64699b7e67\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.287701 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs\" (UID: \"60632753-693e-4352-b416-3b64699b7e67\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.287777 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs\" (UID: \"60632753-693e-4352-b416-3b64699b7e67\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.287891 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs\" (UID: \"60632753-693e-4352-b416-3b64699b7e67\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.287954 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7dv8\" (UniqueName: \"kubernetes.io/projected/60632753-693e-4352-b416-3b64699b7e67-kube-api-access-p7dv8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs\" (UID: \"60632753-693e-4352-b416-3b64699b7e67\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.293984 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs\" (UID: \"60632753-693e-4352-b416-3b64699b7e67\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.293993 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs\" (UID: \"60632753-693e-4352-b416-3b64699b7e67\") " 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.296335 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs\" (UID: \"60632753-693e-4352-b416-3b64699b7e67\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.310190 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7dv8\" (UniqueName: \"kubernetes.io/projected/60632753-693e-4352-b416-3b64699b7e67-kube-api-access-p7dv8\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs\" (UID: \"60632753-693e-4352-b416-3b64699b7e67\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.319140 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" Mar 08 01:15:12 crc kubenswrapper[4762]: I0308 01:15:12.889249 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs"] Mar 08 01:15:13 crc kubenswrapper[4762]: I0308 01:15:13.869643 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" event={"ID":"60632753-693e-4352-b416-3b64699b7e67","Type":"ContainerStarted","Data":"27fb6b19f815432313b4b875bc643a06e686c46ecc4b18bbc5a7021b798ebc7c"} Mar 08 01:15:13 crc kubenswrapper[4762]: I0308 01:15:13.870046 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" event={"ID":"60632753-693e-4352-b416-3b64699b7e67","Type":"ContainerStarted","Data":"460580d5a5952e078025844e237dd778b4204f07179bbb024db3a8d89f987204"} Mar 08 01:15:13 crc kubenswrapper[4762]: I0308 01:15:13.907089 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" podStartSLOduration=2.392038227 podStartE2EDuration="2.907059989s" podCreationTimestamp="2026-03-08 01:15:11 +0000 UTC" firstStartedPulling="2026-03-08 01:15:12.891055548 +0000 UTC m=+3134.365199902" lastFinishedPulling="2026-03-08 01:15:13.40607731 +0000 UTC m=+3134.880221664" observedRunningTime="2026-03-08 01:15:13.89190585 +0000 UTC m=+3135.366050234" watchObservedRunningTime="2026-03-08 01:15:13.907059989 +0000 UTC m=+3135.381204373" Mar 08 01:15:31 crc kubenswrapper[4762]: I0308 01:15:31.179976 4762 scope.go:117] "RemoveContainer" containerID="a01a097c4a18ad1ce113dfab0995ee03c19cef1f2ef01ee694984dab715e609b" Mar 08 01:15:42 crc kubenswrapper[4762]: I0308 01:15:42.252134 4762 generic.go:334] "Generic (PLEG): container finished" podID="60632753-693e-4352-b416-3b64699b7e67" 
containerID="27fb6b19f815432313b4b875bc643a06e686c46ecc4b18bbc5a7021b798ebc7c" exitCode=0 Mar 08 01:15:42 crc kubenswrapper[4762]: I0308 01:15:42.252235 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" event={"ID":"60632753-693e-4352-b416-3b64699b7e67","Type":"ContainerDied","Data":"27fb6b19f815432313b4b875bc643a06e686c46ecc4b18bbc5a7021b798ebc7c"} Mar 08 01:15:42 crc kubenswrapper[4762]: I0308 01:15:42.852187 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:15:42 crc kubenswrapper[4762]: I0308 01:15:42.852245 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:15:43 crc kubenswrapper[4762]: I0308 01:15:43.788891 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" Mar 08 01:15:43 crc kubenswrapper[4762]: I0308 01:15:43.897088 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-ssh-key-openstack-edpm-ipam\") pod \"60632753-693e-4352-b416-3b64699b7e67\" (UID: \"60632753-693e-4352-b416-3b64699b7e67\") " Mar 08 01:15:43 crc kubenswrapper[4762]: I0308 01:15:43.897170 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7dv8\" (UniqueName: \"kubernetes.io/projected/60632753-693e-4352-b416-3b64699b7e67-kube-api-access-p7dv8\") pod \"60632753-693e-4352-b416-3b64699b7e67\" (UID: \"60632753-693e-4352-b416-3b64699b7e67\") " Mar 08 01:15:43 crc kubenswrapper[4762]: I0308 01:15:43.897230 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-ceph\") pod \"60632753-693e-4352-b416-3b64699b7e67\" (UID: \"60632753-693e-4352-b416-3b64699b7e67\") " Mar 08 01:15:43 crc kubenswrapper[4762]: I0308 01:15:43.897274 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-inventory\") pod \"60632753-693e-4352-b416-3b64699b7e67\" (UID: \"60632753-693e-4352-b416-3b64699b7e67\") " Mar 08 01:15:43 crc kubenswrapper[4762]: I0308 01:15:43.909998 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-ceph" (OuterVolumeSpecName: "ceph") pod "60632753-693e-4352-b416-3b64699b7e67" (UID: "60632753-693e-4352-b416-3b64699b7e67"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:15:43 crc kubenswrapper[4762]: I0308 01:15:43.910061 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60632753-693e-4352-b416-3b64699b7e67-kube-api-access-p7dv8" (OuterVolumeSpecName: "kube-api-access-p7dv8") pod "60632753-693e-4352-b416-3b64699b7e67" (UID: "60632753-693e-4352-b416-3b64699b7e67"). InnerVolumeSpecName "kube-api-access-p7dv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:15:43 crc kubenswrapper[4762]: I0308 01:15:43.933311 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "60632753-693e-4352-b416-3b64699b7e67" (UID: "60632753-693e-4352-b416-3b64699b7e67"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:15:43 crc kubenswrapper[4762]: I0308 01:15:43.948208 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-inventory" (OuterVolumeSpecName: "inventory") pod "60632753-693e-4352-b416-3b64699b7e67" (UID: "60632753-693e-4352-b416-3b64699b7e67"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:15:43 crc kubenswrapper[4762]: I0308 01:15:43.999817 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:15:43 crc kubenswrapper[4762]: I0308 01:15:43.999855 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7dv8\" (UniqueName: \"kubernetes.io/projected/60632753-693e-4352-b416-3b64699b7e67-kube-api-access-p7dv8\") on node \"crc\" DevicePath \"\"" Mar 08 01:15:43 crc kubenswrapper[4762]: I0308 01:15:43.999869 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:15:43 crc kubenswrapper[4762]: I0308 01:15:43.999881 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/60632753-693e-4352-b416-3b64699b7e67-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.280918 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" event={"ID":"60632753-693e-4352-b416-3b64699b7e67","Type":"ContainerDied","Data":"460580d5a5952e078025844e237dd778b4204f07179bbb024db3a8d89f987204"} Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.280963 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.280969 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="460580d5a5952e078025844e237dd778b4204f07179bbb024db3a8d89f987204" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.439457 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg"] Mar 08 01:15:44 crc kubenswrapper[4762]: E0308 01:15:44.440013 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60632753-693e-4352-b416-3b64699b7e67" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.440043 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="60632753-693e-4352-b416-3b64699b7e67" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.440363 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="60632753-693e-4352-b416-3b64699b7e67" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.441485 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.444711 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.447261 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.448214 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.448641 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.450204 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.454075 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg"] Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.512252 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b25bg\" (UID: \"7805ca0f-f81b-450a-b661-c8599dd9e719\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.512333 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b25bg\" (UID: 
\"7805ca0f-f81b-450a-b661-c8599dd9e719\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.512378 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b25bg\" (UID: \"7805ca0f-f81b-450a-b661-c8599dd9e719\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.512905 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7xrp\" (UniqueName: \"kubernetes.io/projected/7805ca0f-f81b-450a-b661-c8599dd9e719-kube-api-access-n7xrp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b25bg\" (UID: \"7805ca0f-f81b-450a-b661-c8599dd9e719\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.615600 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7xrp\" (UniqueName: \"kubernetes.io/projected/7805ca0f-f81b-450a-b661-c8599dd9e719-kube-api-access-n7xrp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b25bg\" (UID: \"7805ca0f-f81b-450a-b661-c8599dd9e719\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.615710 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b25bg\" (UID: \"7805ca0f-f81b-450a-b661-c8599dd9e719\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 
01:15:44.615783 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b25bg\" (UID: \"7805ca0f-f81b-450a-b661-c8599dd9e719\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.615825 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b25bg\" (UID: \"7805ca0f-f81b-450a-b661-c8599dd9e719\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.622217 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b25bg\" (UID: \"7805ca0f-f81b-450a-b661-c8599dd9e719\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.622826 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b25bg\" (UID: \"7805ca0f-f81b-450a-b661-c8599dd9e719\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.625195 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-ceph\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-b25bg\" (UID: \"7805ca0f-f81b-450a-b661-c8599dd9e719\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.637599 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7xrp\" (UniqueName: \"kubernetes.io/projected/7805ca0f-f81b-450a-b661-c8599dd9e719-kube-api-access-n7xrp\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-b25bg\" (UID: \"7805ca0f-f81b-450a-b661-c8599dd9e719\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" Mar 08 01:15:44 crc kubenswrapper[4762]: I0308 01:15:44.769434 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" Mar 08 01:15:45 crc kubenswrapper[4762]: I0308 01:15:45.341107 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg"] Mar 08 01:15:46 crc kubenswrapper[4762]: I0308 01:15:46.309029 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" event={"ID":"7805ca0f-f81b-450a-b661-c8599dd9e719","Type":"ContainerStarted","Data":"03b70e0f802ae24dbd27b316286a80c6c384792a1870bb234eac981b72819d94"} Mar 08 01:15:46 crc kubenswrapper[4762]: I0308 01:15:46.309465 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" event={"ID":"7805ca0f-f81b-450a-b661-c8599dd9e719","Type":"ContainerStarted","Data":"d5452ca7385e14a8dfcf249e8acc37865f955a290c4a7b4885ee11eada65459a"} Mar 08 01:15:46 crc kubenswrapper[4762]: I0308 01:15:46.347492 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" podStartSLOduration=1.936547693 
podStartE2EDuration="2.347466392s" podCreationTimestamp="2026-03-08 01:15:44 +0000 UTC" firstStartedPulling="2026-03-08 01:15:45.33643562 +0000 UTC m=+3166.810579974" lastFinishedPulling="2026-03-08 01:15:45.747354319 +0000 UTC m=+3167.221498673" observedRunningTime="2026-03-08 01:15:46.332615053 +0000 UTC m=+3167.806759437" watchObservedRunningTime="2026-03-08 01:15:46.347466392 +0000 UTC m=+3167.821610776" Mar 08 01:15:52 crc kubenswrapper[4762]: I0308 01:15:52.389905 4762 generic.go:334] "Generic (PLEG): container finished" podID="7805ca0f-f81b-450a-b661-c8599dd9e719" containerID="03b70e0f802ae24dbd27b316286a80c6c384792a1870bb234eac981b72819d94" exitCode=0 Mar 08 01:15:52 crc kubenswrapper[4762]: I0308 01:15:52.390491 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" event={"ID":"7805ca0f-f81b-450a-b661-c8599dd9e719","Type":"ContainerDied","Data":"03b70e0f802ae24dbd27b316286a80c6c384792a1870bb234eac981b72819d94"} Mar 08 01:15:53 crc kubenswrapper[4762]: I0308 01:15:53.914821 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" Mar 08 01:15:53 crc kubenswrapper[4762]: I0308 01:15:53.969342 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-ceph\") pod \"7805ca0f-f81b-450a-b661-c8599dd9e719\" (UID: \"7805ca0f-f81b-450a-b661-c8599dd9e719\") " Mar 08 01:15:53 crc kubenswrapper[4762]: I0308 01:15:53.969384 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-ssh-key-openstack-edpm-ipam\") pod \"7805ca0f-f81b-450a-b661-c8599dd9e719\" (UID: \"7805ca0f-f81b-450a-b661-c8599dd9e719\") " Mar 08 01:15:53 crc kubenswrapper[4762]: I0308 01:15:53.969403 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-inventory\") pod \"7805ca0f-f81b-450a-b661-c8599dd9e719\" (UID: \"7805ca0f-f81b-450a-b661-c8599dd9e719\") " Mar 08 01:15:53 crc kubenswrapper[4762]: I0308 01:15:53.969442 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7xrp\" (UniqueName: \"kubernetes.io/projected/7805ca0f-f81b-450a-b661-c8599dd9e719-kube-api-access-n7xrp\") pod \"7805ca0f-f81b-450a-b661-c8599dd9e719\" (UID: \"7805ca0f-f81b-450a-b661-c8599dd9e719\") " Mar 08 01:15:53 crc kubenswrapper[4762]: I0308 01:15:53.982110 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-ceph" (OuterVolumeSpecName: "ceph") pod "7805ca0f-f81b-450a-b661-c8599dd9e719" (UID: "7805ca0f-f81b-450a-b661-c8599dd9e719"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:15:53 crc kubenswrapper[4762]: I0308 01:15:53.982195 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7805ca0f-f81b-450a-b661-c8599dd9e719-kube-api-access-n7xrp" (OuterVolumeSpecName: "kube-api-access-n7xrp") pod "7805ca0f-f81b-450a-b661-c8599dd9e719" (UID: "7805ca0f-f81b-450a-b661-c8599dd9e719"). InnerVolumeSpecName "kube-api-access-n7xrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:15:53 crc kubenswrapper[4762]: I0308 01:15:53.998277 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-inventory" (OuterVolumeSpecName: "inventory") pod "7805ca0f-f81b-450a-b661-c8599dd9e719" (UID: "7805ca0f-f81b-450a-b661-c8599dd9e719"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.004332 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7805ca0f-f81b-450a-b661-c8599dd9e719" (UID: "7805ca0f-f81b-450a-b661-c8599dd9e719"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.072632 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.072685 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7xrp\" (UniqueName: \"kubernetes.io/projected/7805ca0f-f81b-450a-b661-c8599dd9e719-kube-api-access-n7xrp\") on node \"crc\" DevicePath \"\"" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.072706 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.072724 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7805ca0f-f81b-450a-b661-c8599dd9e719-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.411787 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" event={"ID":"7805ca0f-f81b-450a-b661-c8599dd9e719","Type":"ContainerDied","Data":"d5452ca7385e14a8dfcf249e8acc37865f955a290c4a7b4885ee11eada65459a"} Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.412120 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5452ca7385e14a8dfcf249e8acc37865f955a290c4a7b4885ee11eada65459a" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.411846 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-b25bg" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.514522 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp"] Mar 08 01:15:54 crc kubenswrapper[4762]: E0308 01:15:54.515058 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7805ca0f-f81b-450a-b661-c8599dd9e719" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.515083 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="7805ca0f-f81b-450a-b661-c8599dd9e719" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.515369 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="7805ca0f-f81b-450a-b661-c8599dd9e719" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.516255 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.518414 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.518490 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.518678 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.519003 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.520949 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.535671 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp"] Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.683602 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n6jzp\" (UID: \"648ab410-6f12-42c0-83de-35e6a44712b1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.683689 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n6jzp\" (UID: 
\"648ab410-6f12-42c0-83de-35e6a44712b1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.683798 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc77q\" (UniqueName: \"kubernetes.io/projected/648ab410-6f12-42c0-83de-35e6a44712b1-kube-api-access-wc77q\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n6jzp\" (UID: \"648ab410-6f12-42c0-83de-35e6a44712b1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.684846 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n6jzp\" (UID: \"648ab410-6f12-42c0-83de-35e6a44712b1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.786854 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc77q\" (UniqueName: \"kubernetes.io/projected/648ab410-6f12-42c0-83de-35e6a44712b1-kube-api-access-wc77q\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n6jzp\" (UID: \"648ab410-6f12-42c0-83de-35e6a44712b1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.786952 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n6jzp\" (UID: \"648ab410-6f12-42c0-83de-35e6a44712b1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.787230 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n6jzp\" (UID: \"648ab410-6f12-42c0-83de-35e6a44712b1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.787358 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n6jzp\" (UID: \"648ab410-6f12-42c0-83de-35e6a44712b1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.793029 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n6jzp\" (UID: \"648ab410-6f12-42c0-83de-35e6a44712b1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.793615 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n6jzp\" (UID: \"648ab410-6f12-42c0-83de-35e6a44712b1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.802414 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n6jzp\" (UID: \"648ab410-6f12-42c0-83de-35e6a44712b1\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.816217 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc77q\" (UniqueName: \"kubernetes.io/projected/648ab410-6f12-42c0-83de-35e6a44712b1-kube-api-access-wc77q\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-n6jzp\" (UID: \"648ab410-6f12-42c0-83de-35e6a44712b1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" Mar 08 01:15:54 crc kubenswrapper[4762]: I0308 01:15:54.849641 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" Mar 08 01:15:55 crc kubenswrapper[4762]: I0308 01:15:55.438628 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp"] Mar 08 01:15:56 crc kubenswrapper[4762]: I0308 01:15:56.448492 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" event={"ID":"648ab410-6f12-42c0-83de-35e6a44712b1","Type":"ContainerStarted","Data":"893d5d9ac8e5935d8e6de03829358632b89c9c950081fba7877e06337a01cc09"} Mar 08 01:15:56 crc kubenswrapper[4762]: I0308 01:15:56.449144 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" event={"ID":"648ab410-6f12-42c0-83de-35e6a44712b1","Type":"ContainerStarted","Data":"8072c22c0c6342872a6b83bf94eb01818a0c8db0711802fb0a460fa0c0b35e0a"} Mar 08 01:15:56 crc kubenswrapper[4762]: I0308 01:15:56.472436 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" podStartSLOduration=1.814237528 podStartE2EDuration="2.472411813s" podCreationTimestamp="2026-03-08 01:15:54 +0000 UTC" firstStartedPulling="2026-03-08 01:15:55.445447985 +0000 UTC m=+3176.919592349" 
lastFinishedPulling="2026-03-08 01:15:56.10362225 +0000 UTC m=+3177.577766634" observedRunningTime="2026-03-08 01:15:56.461875449 +0000 UTC m=+3177.936019803" watchObservedRunningTime="2026-03-08 01:15:56.472411813 +0000 UTC m=+3177.946556157" Mar 08 01:16:00 crc kubenswrapper[4762]: I0308 01:16:00.188820 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548876-bcx8c"] Mar 08 01:16:00 crc kubenswrapper[4762]: I0308 01:16:00.191566 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548876-bcx8c" Mar 08 01:16:00 crc kubenswrapper[4762]: I0308 01:16:00.194341 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:16:00 crc kubenswrapper[4762]: I0308 01:16:00.194911 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:16:00 crc kubenswrapper[4762]: I0308 01:16:00.194980 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:16:00 crc kubenswrapper[4762]: I0308 01:16:00.206644 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548876-bcx8c"] Mar 08 01:16:00 crc kubenswrapper[4762]: I0308 01:16:00.216986 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqp2f\" (UniqueName: \"kubernetes.io/projected/583cac29-3bb2-4e52-802d-288ba8775619-kube-api-access-sqp2f\") pod \"auto-csr-approver-29548876-bcx8c\" (UID: \"583cac29-3bb2-4e52-802d-288ba8775619\") " pod="openshift-infra/auto-csr-approver-29548876-bcx8c" Mar 08 01:16:00 crc kubenswrapper[4762]: I0308 01:16:00.318674 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqp2f\" (UniqueName: 
\"kubernetes.io/projected/583cac29-3bb2-4e52-802d-288ba8775619-kube-api-access-sqp2f\") pod \"auto-csr-approver-29548876-bcx8c\" (UID: \"583cac29-3bb2-4e52-802d-288ba8775619\") " pod="openshift-infra/auto-csr-approver-29548876-bcx8c" Mar 08 01:16:00 crc kubenswrapper[4762]: I0308 01:16:00.349463 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqp2f\" (UniqueName: \"kubernetes.io/projected/583cac29-3bb2-4e52-802d-288ba8775619-kube-api-access-sqp2f\") pod \"auto-csr-approver-29548876-bcx8c\" (UID: \"583cac29-3bb2-4e52-802d-288ba8775619\") " pod="openshift-infra/auto-csr-approver-29548876-bcx8c" Mar 08 01:16:00 crc kubenswrapper[4762]: I0308 01:16:00.520675 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548876-bcx8c" Mar 08 01:16:01 crc kubenswrapper[4762]: I0308 01:16:01.029507 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548876-bcx8c"] Mar 08 01:16:01 crc kubenswrapper[4762]: I0308 01:16:01.527561 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548876-bcx8c" event={"ID":"583cac29-3bb2-4e52-802d-288ba8775619","Type":"ContainerStarted","Data":"e794bf93fd97f342427fff29cab0b859f50f19a0b714bd4b4c31b8106123be30"} Mar 08 01:16:02 crc kubenswrapper[4762]: I0308 01:16:02.541782 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548876-bcx8c" event={"ID":"583cac29-3bb2-4e52-802d-288ba8775619","Type":"ContainerStarted","Data":"23a2f6264104b4c44953adfe568ec3a7cbe1f07634a04f961a3aa5aa6a5c2560"} Mar 08 01:16:02 crc kubenswrapper[4762]: I0308 01:16:02.565330 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548876-bcx8c" podStartSLOduration=1.466783564 podStartE2EDuration="2.565303309s" podCreationTimestamp="2026-03-08 01:16:00 +0000 UTC" 
firstStartedPulling="2026-03-08 01:16:01.033784181 +0000 UTC m=+3182.507928535" lastFinishedPulling="2026-03-08 01:16:02.132303936 +0000 UTC m=+3183.606448280" observedRunningTime="2026-03-08 01:16:02.561933956 +0000 UTC m=+3184.036078330" watchObservedRunningTime="2026-03-08 01:16:02.565303309 +0000 UTC m=+3184.039447673" Mar 08 01:16:03 crc kubenswrapper[4762]: I0308 01:16:03.552536 4762 generic.go:334] "Generic (PLEG): container finished" podID="583cac29-3bb2-4e52-802d-288ba8775619" containerID="23a2f6264104b4c44953adfe568ec3a7cbe1f07634a04f961a3aa5aa6a5c2560" exitCode=0 Mar 08 01:16:03 crc kubenswrapper[4762]: I0308 01:16:03.552607 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548876-bcx8c" event={"ID":"583cac29-3bb2-4e52-802d-288ba8775619","Type":"ContainerDied","Data":"23a2f6264104b4c44953adfe568ec3a7cbe1f07634a04f961a3aa5aa6a5c2560"} Mar 08 01:16:04 crc kubenswrapper[4762]: I0308 01:16:04.995821 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548876-bcx8c" Mar 08 01:16:05 crc kubenswrapper[4762]: I0308 01:16:05.127351 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqp2f\" (UniqueName: \"kubernetes.io/projected/583cac29-3bb2-4e52-802d-288ba8775619-kube-api-access-sqp2f\") pod \"583cac29-3bb2-4e52-802d-288ba8775619\" (UID: \"583cac29-3bb2-4e52-802d-288ba8775619\") " Mar 08 01:16:05 crc kubenswrapper[4762]: I0308 01:16:05.137411 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/583cac29-3bb2-4e52-802d-288ba8775619-kube-api-access-sqp2f" (OuterVolumeSpecName: "kube-api-access-sqp2f") pod "583cac29-3bb2-4e52-802d-288ba8775619" (UID: "583cac29-3bb2-4e52-802d-288ba8775619"). InnerVolumeSpecName "kube-api-access-sqp2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:16:05 crc kubenswrapper[4762]: I0308 01:16:05.230856 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqp2f\" (UniqueName: \"kubernetes.io/projected/583cac29-3bb2-4e52-802d-288ba8775619-kube-api-access-sqp2f\") on node \"crc\" DevicePath \"\"" Mar 08 01:16:05 crc kubenswrapper[4762]: I0308 01:16:05.572860 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548876-bcx8c" event={"ID":"583cac29-3bb2-4e52-802d-288ba8775619","Type":"ContainerDied","Data":"e794bf93fd97f342427fff29cab0b859f50f19a0b714bd4b4c31b8106123be30"} Mar 08 01:16:05 crc kubenswrapper[4762]: I0308 01:16:05.573254 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e794bf93fd97f342427fff29cab0b859f50f19a0b714bd4b4c31b8106123be30" Mar 08 01:16:05 crc kubenswrapper[4762]: I0308 01:16:05.572958 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548876-bcx8c" Mar 08 01:16:05 crc kubenswrapper[4762]: I0308 01:16:05.641975 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548870-6zhvp"] Mar 08 01:16:05 crc kubenswrapper[4762]: I0308 01:16:05.653935 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548870-6zhvp"] Mar 08 01:16:07 crc kubenswrapper[4762]: I0308 01:16:07.276930 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9b69cae-0a10-4319-a039-0332adac8b95" path="/var/lib/kubelet/pods/f9b69cae-0a10-4319-a039-0332adac8b95/volumes" Mar 08 01:16:12 crc kubenswrapper[4762]: I0308 01:16:12.852462 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 08 01:16:12 crc kubenswrapper[4762]: I0308 01:16:12.853164 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:16:31 crc kubenswrapper[4762]: I0308 01:16:31.268499 4762 scope.go:117] "RemoveContainer" containerID="0d6633dd6ff33a751ea9966e9d56a58902f1948db5ec285f4d55fbd5e940d3e0" Mar 08 01:16:40 crc kubenswrapper[4762]: I0308 01:16:40.988010 4762 generic.go:334] "Generic (PLEG): container finished" podID="648ab410-6f12-42c0-83de-35e6a44712b1" containerID="893d5d9ac8e5935d8e6de03829358632b89c9c950081fba7877e06337a01cc09" exitCode=0 Mar 08 01:16:40 crc kubenswrapper[4762]: I0308 01:16:40.988047 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" event={"ID":"648ab410-6f12-42c0-83de-35e6a44712b1","Type":"ContainerDied","Data":"893d5d9ac8e5935d8e6de03829358632b89c9c950081fba7877e06337a01cc09"} Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.562289 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.695740 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-ceph\") pod \"648ab410-6f12-42c0-83de-35e6a44712b1\" (UID: \"648ab410-6f12-42c0-83de-35e6a44712b1\") " Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.695794 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-ssh-key-openstack-edpm-ipam\") pod \"648ab410-6f12-42c0-83de-35e6a44712b1\" (UID: \"648ab410-6f12-42c0-83de-35e6a44712b1\") " Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.695853 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-inventory\") pod \"648ab410-6f12-42c0-83de-35e6a44712b1\" (UID: \"648ab410-6f12-42c0-83de-35e6a44712b1\") " Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.695879 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc77q\" (UniqueName: \"kubernetes.io/projected/648ab410-6f12-42c0-83de-35e6a44712b1-kube-api-access-wc77q\") pod \"648ab410-6f12-42c0-83de-35e6a44712b1\" (UID: \"648ab410-6f12-42c0-83de-35e6a44712b1\") " Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.701447 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/648ab410-6f12-42c0-83de-35e6a44712b1-kube-api-access-wc77q" (OuterVolumeSpecName: "kube-api-access-wc77q") pod "648ab410-6f12-42c0-83de-35e6a44712b1" (UID: "648ab410-6f12-42c0-83de-35e6a44712b1"). InnerVolumeSpecName "kube-api-access-wc77q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.704866 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-ceph" (OuterVolumeSpecName: "ceph") pod "648ab410-6f12-42c0-83de-35e6a44712b1" (UID: "648ab410-6f12-42c0-83de-35e6a44712b1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.725104 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "648ab410-6f12-42c0-83de-35e6a44712b1" (UID: "648ab410-6f12-42c0-83de-35e6a44712b1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.738608 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-inventory" (OuterVolumeSpecName: "inventory") pod "648ab410-6f12-42c0-83de-35e6a44712b1" (UID: "648ab410-6f12-42c0-83de-35e6a44712b1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.801589 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.801642 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.801662 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/648ab410-6f12-42c0-83de-35e6a44712b1-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.801682 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc77q\" (UniqueName: \"kubernetes.io/projected/648ab410-6f12-42c0-83de-35e6a44712b1-kube-api-access-wc77q\") on node \"crc\" DevicePath \"\"" Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.851726 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.851803 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.851841 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.852393 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 01:16:42 crc kubenswrapper[4762]: I0308 01:16:42.852453 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" gracePeriod=600 Mar 08 01:16:42 crc kubenswrapper[4762]: E0308 01:16:42.984748 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.013390 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" event={"ID":"648ab410-6f12-42c0-83de-35e6a44712b1","Type":"ContainerDied","Data":"8072c22c0c6342872a6b83bf94eb01818a0c8db0711802fb0a460fa0c0b35e0a"} Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.013436 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-n6jzp" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.013443 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8072c22c0c6342872a6b83bf94eb01818a0c8db0711802fb0a460fa0c0b35e0a" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.016837 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" exitCode=0 Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.016899 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499"} Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.016958 4762 scope.go:117] "RemoveContainer" containerID="811c2366d1dd066052dabcf66e6b8dd816dc127e5560d3cea5c3c417cbba5630" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.018143 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:16:43 crc kubenswrapper[4762]: E0308 01:16:43.018734 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.149024 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82"] Mar 08 01:16:43 crc kubenswrapper[4762]: E0308 
01:16:43.149524 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648ab410-6f12-42c0-83de-35e6a44712b1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.149541 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="648ab410-6f12-42c0-83de-35e6a44712b1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 08 01:16:43 crc kubenswrapper[4762]: E0308 01:16:43.149566 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="583cac29-3bb2-4e52-802d-288ba8775619" containerName="oc" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.149572 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="583cac29-3bb2-4e52-802d-288ba8775619" containerName="oc" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.149845 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="648ab410-6f12-42c0-83de-35e6a44712b1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.149869 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="583cac29-3bb2-4e52-802d-288ba8775619" containerName="oc" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.150606 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.159450 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.159505 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.159458 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.159839 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.160093 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.170874 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82"] Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.318297 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82\" (UID: \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.318384 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82\" (UID: 
\"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.318485 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82\" (UID: \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.318536 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmfcn\" (UniqueName: \"kubernetes.io/projected/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-kube-api-access-cmfcn\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82\" (UID: \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.421389 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82\" (UID: \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.421655 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82\" (UID: \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.421936 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82\" (UID: \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.422094 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmfcn\" (UniqueName: \"kubernetes.io/projected/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-kube-api-access-cmfcn\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82\" (UID: \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.431027 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82\" (UID: \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.431213 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82\" (UID: \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.431804 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82\" (UID: 
\"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.451061 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmfcn\" (UniqueName: \"kubernetes.io/projected/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-kube-api-access-cmfcn\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82\" (UID: \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" Mar 08 01:16:43 crc kubenswrapper[4762]: I0308 01:16:43.482905 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" Mar 08 01:16:44 crc kubenswrapper[4762]: I0308 01:16:44.133841 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 01:16:44 crc kubenswrapper[4762]: I0308 01:16:44.141337 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82"] Mar 08 01:16:45 crc kubenswrapper[4762]: I0308 01:16:45.035059 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" event={"ID":"c0a2baeb-7f9c-47e2-b201-cd7ff3641547","Type":"ContainerStarted","Data":"5e03c4771bcb7db5460fdc9a46dacee132459913eb0d4883cba7034fbf6f52f1"} Mar 08 01:16:45 crc kubenswrapper[4762]: I0308 01:16:45.035643 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" event={"ID":"c0a2baeb-7f9c-47e2-b201-cd7ff3641547","Type":"ContainerStarted","Data":"762b03c4086368a7b994c870b54a7b3dce11daa33695bd9274f5ebedf945670f"} Mar 08 01:16:45 crc kubenswrapper[4762]: I0308 01:16:45.056277 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" podStartSLOduration=1.5253964450000002 podStartE2EDuration="2.056260479s" podCreationTimestamp="2026-03-08 01:16:43 +0000 UTC" firstStartedPulling="2026-03-08 01:16:44.133588516 +0000 UTC m=+3225.607732860" lastFinishedPulling="2026-03-08 01:16:44.66445252 +0000 UTC m=+3226.138596894" observedRunningTime="2026-03-08 01:16:45.054524606 +0000 UTC m=+3226.528668950" watchObservedRunningTime="2026-03-08 01:16:45.056260479 +0000 UTC m=+3226.530404813" Mar 08 01:16:50 crc kubenswrapper[4762]: I0308 01:16:50.088265 4762 generic.go:334] "Generic (PLEG): container finished" podID="c0a2baeb-7f9c-47e2-b201-cd7ff3641547" containerID="5e03c4771bcb7db5460fdc9a46dacee132459913eb0d4883cba7034fbf6f52f1" exitCode=0 Mar 08 01:16:50 crc kubenswrapper[4762]: I0308 01:16:50.088411 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" event={"ID":"c0a2baeb-7f9c-47e2-b201-cd7ff3641547","Type":"ContainerDied","Data":"5e03c4771bcb7db5460fdc9a46dacee132459913eb0d4883cba7034fbf6f52f1"} Mar 08 01:16:51 crc kubenswrapper[4762]: I0308 01:16:51.638539 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" Mar 08 01:16:51 crc kubenswrapper[4762]: I0308 01:16:51.727796 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-ssh-key-openstack-edpm-ipam\") pod \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\" (UID: \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\") " Mar 08 01:16:51 crc kubenswrapper[4762]: I0308 01:16:51.727907 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-ceph\") pod \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\" (UID: \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\") " Mar 08 01:16:51 crc kubenswrapper[4762]: I0308 01:16:51.727989 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmfcn\" (UniqueName: \"kubernetes.io/projected/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-kube-api-access-cmfcn\") pod \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\" (UID: \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\") " Mar 08 01:16:51 crc kubenswrapper[4762]: I0308 01:16:51.728132 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-inventory\") pod \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\" (UID: \"c0a2baeb-7f9c-47e2-b201-cd7ff3641547\") " Mar 08 01:16:51 crc kubenswrapper[4762]: I0308 01:16:51.733172 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-kube-api-access-cmfcn" (OuterVolumeSpecName: "kube-api-access-cmfcn") pod "c0a2baeb-7f9c-47e2-b201-cd7ff3641547" (UID: "c0a2baeb-7f9c-47e2-b201-cd7ff3641547"). InnerVolumeSpecName "kube-api-access-cmfcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:16:51 crc kubenswrapper[4762]: I0308 01:16:51.740000 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-ceph" (OuterVolumeSpecName: "ceph") pod "c0a2baeb-7f9c-47e2-b201-cd7ff3641547" (UID: "c0a2baeb-7f9c-47e2-b201-cd7ff3641547"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:16:51 crc kubenswrapper[4762]: I0308 01:16:51.755324 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c0a2baeb-7f9c-47e2-b201-cd7ff3641547" (UID: "c0a2baeb-7f9c-47e2-b201-cd7ff3641547"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:16:51 crc kubenswrapper[4762]: I0308 01:16:51.778461 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-inventory" (OuterVolumeSpecName: "inventory") pod "c0a2baeb-7f9c-47e2-b201-cd7ff3641547" (UID: "c0a2baeb-7f9c-47e2-b201-cd7ff3641547"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:16:51 crc kubenswrapper[4762]: I0308 01:16:51.831556 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmfcn\" (UniqueName: \"kubernetes.io/projected/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-kube-api-access-cmfcn\") on node \"crc\" DevicePath \"\"" Mar 08 01:16:51 crc kubenswrapper[4762]: I0308 01:16:51.831584 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:16:51 crc kubenswrapper[4762]: I0308 01:16:51.831595 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:16:51 crc kubenswrapper[4762]: I0308 01:16:51.831605 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/c0a2baeb-7f9c-47e2-b201-cd7ff3641547-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.106522 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" event={"ID":"c0a2baeb-7f9c-47e2-b201-cd7ff3641547","Type":"ContainerDied","Data":"762b03c4086368a7b994c870b54a7b3dce11daa33695bd9274f5ebedf945670f"} Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.106930 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="762b03c4086368a7b994c870b54a7b3dce11daa33695bd9274f5ebedf945670f" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.106572 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.261279 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc"] Mar 08 01:16:52 crc kubenswrapper[4762]: E0308 01:16:52.261846 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0a2baeb-7f9c-47e2-b201-cd7ff3641547" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.261867 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0a2baeb-7f9c-47e2-b201-cd7ff3641547" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.262174 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0a2baeb-7f9c-47e2-b201-cd7ff3641547" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.263048 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.266754 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.268131 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.268221 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.268337 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.269154 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.289460 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc"] Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.341963 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc\" (UID: \"64ea8060-e683-425e-8ae0-0250d59a2c46\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.342145 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc\" (UID: 
\"64ea8060-e683-425e-8ae0-0250d59a2c46\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.342299 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr72t\" (UniqueName: \"kubernetes.io/projected/64ea8060-e683-425e-8ae0-0250d59a2c46-kube-api-access-fr72t\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc\" (UID: \"64ea8060-e683-425e-8ae0-0250d59a2c46\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.342399 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc\" (UID: \"64ea8060-e683-425e-8ae0-0250d59a2c46\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.445386 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc\" (UID: \"64ea8060-e683-425e-8ae0-0250d59a2c46\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.445548 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc\" (UID: \"64ea8060-e683-425e-8ae0-0250d59a2c46\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.445754 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr72t\" (UniqueName: \"kubernetes.io/projected/64ea8060-e683-425e-8ae0-0250d59a2c46-kube-api-access-fr72t\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc\" (UID: \"64ea8060-e683-425e-8ae0-0250d59a2c46\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.445981 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc\" (UID: \"64ea8060-e683-425e-8ae0-0250d59a2c46\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.449729 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc\" (UID: \"64ea8060-e683-425e-8ae0-0250d59a2c46\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.450933 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc\" (UID: \"64ea8060-e683-425e-8ae0-0250d59a2c46\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.454228 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc\" (UID: 
\"64ea8060-e683-425e-8ae0-0250d59a2c46\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.462708 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr72t\" (UniqueName: \"kubernetes.io/projected/64ea8060-e683-425e-8ae0-0250d59a2c46-kube-api-access-fr72t\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc\" (UID: \"64ea8060-e683-425e-8ae0-0250d59a2c46\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" Mar 08 01:16:52 crc kubenswrapper[4762]: I0308 01:16:52.586380 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" Mar 08 01:16:53 crc kubenswrapper[4762]: I0308 01:16:53.195755 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc"] Mar 08 01:16:54 crc kubenswrapper[4762]: I0308 01:16:54.131475 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" event={"ID":"64ea8060-e683-425e-8ae0-0250d59a2c46","Type":"ContainerStarted","Data":"db6ea6926dc028679b2e9d6470c35dde2402f63e82b0a971c66c71aec98ba936"} Mar 08 01:16:54 crc kubenswrapper[4762]: I0308 01:16:54.131866 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" event={"ID":"64ea8060-e683-425e-8ae0-0250d59a2c46","Type":"ContainerStarted","Data":"73b2ca51269526726190d2ec780491df3712f6c8dd32261869e7b1022877ca06"} Mar 08 01:16:54 crc kubenswrapper[4762]: I0308 01:16:54.159205 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" podStartSLOduration=1.633450597 podStartE2EDuration="2.159185684s" podCreationTimestamp="2026-03-08 01:16:52 +0000 UTC" 
firstStartedPulling="2026-03-08 01:16:53.202983989 +0000 UTC m=+3234.677128363" lastFinishedPulling="2026-03-08 01:16:53.728719096 +0000 UTC m=+3235.202863450" observedRunningTime="2026-03-08 01:16:54.151404714 +0000 UTC m=+3235.625549068" watchObservedRunningTime="2026-03-08 01:16:54.159185684 +0000 UTC m=+3235.633330028" Mar 08 01:16:54 crc kubenswrapper[4762]: I0308 01:16:54.264275 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:16:54 crc kubenswrapper[4762]: E0308 01:16:54.264677 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:17:05 crc kubenswrapper[4762]: I0308 01:17:05.264535 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:17:05 crc kubenswrapper[4762]: E0308 01:17:05.265626 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:17:20 crc kubenswrapper[4762]: I0308 01:17:20.263715 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:17:20 crc kubenswrapper[4762]: E0308 01:17:20.264695 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:17:34 crc kubenswrapper[4762]: I0308 01:17:34.265830 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:17:34 crc kubenswrapper[4762]: E0308 01:17:34.267026 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:17:46 crc kubenswrapper[4762]: I0308 01:17:46.782538 4762 generic.go:334] "Generic (PLEG): container finished" podID="64ea8060-e683-425e-8ae0-0250d59a2c46" containerID="db6ea6926dc028679b2e9d6470c35dde2402f63e82b0a971c66c71aec98ba936" exitCode=0 Mar 08 01:17:46 crc kubenswrapper[4762]: I0308 01:17:46.783279 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" event={"ID":"64ea8060-e683-425e-8ae0-0250d59a2c46","Type":"ContainerDied","Data":"db6ea6926dc028679b2e9d6470c35dde2402f63e82b0a971c66c71aec98ba936"} Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.412940 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.531075 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-ceph\") pod \"64ea8060-e683-425e-8ae0-0250d59a2c46\" (UID: \"64ea8060-e683-425e-8ae0-0250d59a2c46\") " Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.531401 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr72t\" (UniqueName: \"kubernetes.io/projected/64ea8060-e683-425e-8ae0-0250d59a2c46-kube-api-access-fr72t\") pod \"64ea8060-e683-425e-8ae0-0250d59a2c46\" (UID: \"64ea8060-e683-425e-8ae0-0250d59a2c46\") " Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.531516 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-ssh-key-openstack-edpm-ipam\") pod \"64ea8060-e683-425e-8ae0-0250d59a2c46\" (UID: \"64ea8060-e683-425e-8ae0-0250d59a2c46\") " Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.531619 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-inventory\") pod \"64ea8060-e683-425e-8ae0-0250d59a2c46\" (UID: \"64ea8060-e683-425e-8ae0-0250d59a2c46\") " Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.538311 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-ceph" (OuterVolumeSpecName: "ceph") pod "64ea8060-e683-425e-8ae0-0250d59a2c46" (UID: "64ea8060-e683-425e-8ae0-0250d59a2c46"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.539790 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64ea8060-e683-425e-8ae0-0250d59a2c46-kube-api-access-fr72t" (OuterVolumeSpecName: "kube-api-access-fr72t") pod "64ea8060-e683-425e-8ae0-0250d59a2c46" (UID: "64ea8060-e683-425e-8ae0-0250d59a2c46"). InnerVolumeSpecName "kube-api-access-fr72t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.573124 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "64ea8060-e683-425e-8ae0-0250d59a2c46" (UID: "64ea8060-e683-425e-8ae0-0250d59a2c46"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.573152 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-inventory" (OuterVolumeSpecName: "inventory") pod "64ea8060-e683-425e-8ae0-0250d59a2c46" (UID: "64ea8060-e683-425e-8ae0-0250d59a2c46"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.633932 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr72t\" (UniqueName: \"kubernetes.io/projected/64ea8060-e683-425e-8ae0-0250d59a2c46-kube-api-access-fr72t\") on node \"crc\" DevicePath \"\"" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.633970 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.633986 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.633997 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/64ea8060-e683-425e-8ae0-0250d59a2c46-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.809407 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" event={"ID":"64ea8060-e683-425e-8ae0-0250d59a2c46","Type":"ContainerDied","Data":"73b2ca51269526726190d2ec780491df3712f6c8dd32261869e7b1022877ca06"} Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.809464 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73b2ca51269526726190d2ec780491df3712f6c8dd32261869e7b1022877ca06" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.809531 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.907090 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-d7dkx"] Mar 08 01:17:48 crc kubenswrapper[4762]: E0308 01:17:48.907673 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ea8060-e683-425e-8ae0-0250d59a2c46" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.907697 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ea8060-e683-425e-8ae0-0250d59a2c46" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.907966 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ea8060-e683-425e-8ae0-0250d59a2c46" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.908869 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.912012 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.912120 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.913230 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.913677 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.914166 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:17:48 crc kubenswrapper[4762]: I0308 01:17:48.917330 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-d7dkx"] Mar 08 01:17:49 crc kubenswrapper[4762]: I0308 01:17:49.041326 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnxh7\" (UniqueName: \"kubernetes.io/projected/71462aef-16e4-4c41-a3ad-11664b64443d-kube-api-access-cnxh7\") pod \"ssh-known-hosts-edpm-deployment-d7dkx\" (UID: \"71462aef-16e4-4c41-a3ad-11664b64443d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" Mar 08 01:17:49 crc kubenswrapper[4762]: I0308 01:17:49.041391 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-ceph\") pod \"ssh-known-hosts-edpm-deployment-d7dkx\" (UID: \"71462aef-16e4-4c41-a3ad-11664b64443d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" Mar 08 01:17:49 crc 
kubenswrapper[4762]: I0308 01:17:49.041520 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-d7dkx\" (UID: \"71462aef-16e4-4c41-a3ad-11664b64443d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" Mar 08 01:17:49 crc kubenswrapper[4762]: I0308 01:17:49.041573 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-d7dkx\" (UID: \"71462aef-16e4-4c41-a3ad-11664b64443d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" Mar 08 01:17:49 crc kubenswrapper[4762]: I0308 01:17:49.143529 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-d7dkx\" (UID: \"71462aef-16e4-4c41-a3ad-11664b64443d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" Mar 08 01:17:49 crc kubenswrapper[4762]: I0308 01:17:49.143933 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-d7dkx\" (UID: \"71462aef-16e4-4c41-a3ad-11664b64443d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" Mar 08 01:17:49 crc kubenswrapper[4762]: I0308 01:17:49.144721 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnxh7\" (UniqueName: \"kubernetes.io/projected/71462aef-16e4-4c41-a3ad-11664b64443d-kube-api-access-cnxh7\") pod \"ssh-known-hosts-edpm-deployment-d7dkx\" (UID: 
\"71462aef-16e4-4c41-a3ad-11664b64443d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" Mar 08 01:17:49 crc kubenswrapper[4762]: I0308 01:17:49.144796 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-ceph\") pod \"ssh-known-hosts-edpm-deployment-d7dkx\" (UID: \"71462aef-16e4-4c41-a3ad-11664b64443d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" Mar 08 01:17:49 crc kubenswrapper[4762]: I0308 01:17:49.148987 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-d7dkx\" (UID: \"71462aef-16e4-4c41-a3ad-11664b64443d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" Mar 08 01:17:49 crc kubenswrapper[4762]: I0308 01:17:49.149260 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-ceph\") pod \"ssh-known-hosts-edpm-deployment-d7dkx\" (UID: \"71462aef-16e4-4c41-a3ad-11664b64443d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" Mar 08 01:17:49 crc kubenswrapper[4762]: I0308 01:17:49.154441 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-d7dkx\" (UID: \"71462aef-16e4-4c41-a3ad-11664b64443d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" Mar 08 01:17:49 crc kubenswrapper[4762]: I0308 01:17:49.174039 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnxh7\" (UniqueName: \"kubernetes.io/projected/71462aef-16e4-4c41-a3ad-11664b64443d-kube-api-access-cnxh7\") pod \"ssh-known-hosts-edpm-deployment-d7dkx\" (UID: 
\"71462aef-16e4-4c41-a3ad-11664b64443d\") " pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" Mar 08 01:17:49 crc kubenswrapper[4762]: I0308 01:17:49.239291 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" Mar 08 01:17:49 crc kubenswrapper[4762]: I0308 01:17:49.275487 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:17:49 crc kubenswrapper[4762]: E0308 01:17:49.276018 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:17:49 crc kubenswrapper[4762]: I0308 01:17:49.872655 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-d7dkx"] Mar 08 01:17:50 crc kubenswrapper[4762]: I0308 01:17:50.836362 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" event={"ID":"71462aef-16e4-4c41-a3ad-11664b64443d","Type":"ContainerStarted","Data":"66002c6f461b25f1e1aa3222dab08a4c2d3254879869e3d3429c2f5a3cc728a5"} Mar 08 01:17:51 crc kubenswrapper[4762]: I0308 01:17:51.850439 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" event={"ID":"71462aef-16e4-4c41-a3ad-11664b64443d","Type":"ContainerStarted","Data":"5b43ccff72037691663e8090cfe6155d5a9105b6d4ccb85efeb73e4315d841b6"} Mar 08 01:17:51 crc kubenswrapper[4762]: I0308 01:17:51.878487 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" 
podStartSLOduration=3.253244658 podStartE2EDuration="3.878464155s" podCreationTimestamp="2026-03-08 01:17:48 +0000 UTC" firstStartedPulling="2026-03-08 01:17:49.880733662 +0000 UTC m=+3291.354878016" lastFinishedPulling="2026-03-08 01:17:50.505953129 +0000 UTC m=+3291.980097513" observedRunningTime="2026-03-08 01:17:51.870968964 +0000 UTC m=+3293.345113348" watchObservedRunningTime="2026-03-08 01:17:51.878464155 +0000 UTC m=+3293.352608509" Mar 08 01:18:00 crc kubenswrapper[4762]: I0308 01:18:00.182813 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548878-j8mv8"] Mar 08 01:18:00 crc kubenswrapper[4762]: I0308 01:18:00.185215 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548878-j8mv8" Mar 08 01:18:00 crc kubenswrapper[4762]: I0308 01:18:00.188101 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:18:00 crc kubenswrapper[4762]: I0308 01:18:00.192007 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:18:00 crc kubenswrapper[4762]: I0308 01:18:00.192214 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:18:00 crc kubenswrapper[4762]: I0308 01:18:00.195108 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548878-j8mv8"] Mar 08 01:18:00 crc kubenswrapper[4762]: I0308 01:18:00.318561 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7xlv\" (UniqueName: \"kubernetes.io/projected/a0de1641-d236-4654-b681-56222a8d26aa-kube-api-access-d7xlv\") pod \"auto-csr-approver-29548878-j8mv8\" (UID: \"a0de1641-d236-4654-b681-56222a8d26aa\") " pod="openshift-infra/auto-csr-approver-29548878-j8mv8" Mar 08 01:18:00 crc kubenswrapper[4762]: I0308 
01:18:00.420736 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7xlv\" (UniqueName: \"kubernetes.io/projected/a0de1641-d236-4654-b681-56222a8d26aa-kube-api-access-d7xlv\") pod \"auto-csr-approver-29548878-j8mv8\" (UID: \"a0de1641-d236-4654-b681-56222a8d26aa\") " pod="openshift-infra/auto-csr-approver-29548878-j8mv8" Mar 08 01:18:00 crc kubenswrapper[4762]: I0308 01:18:00.444649 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7xlv\" (UniqueName: \"kubernetes.io/projected/a0de1641-d236-4654-b681-56222a8d26aa-kube-api-access-d7xlv\") pod \"auto-csr-approver-29548878-j8mv8\" (UID: \"a0de1641-d236-4654-b681-56222a8d26aa\") " pod="openshift-infra/auto-csr-approver-29548878-j8mv8" Mar 08 01:18:00 crc kubenswrapper[4762]: I0308 01:18:00.519383 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548878-j8mv8" Mar 08 01:18:01 crc kubenswrapper[4762]: I0308 01:18:01.018681 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548878-j8mv8"] Mar 08 01:18:01 crc kubenswrapper[4762]: I0308 01:18:01.993651 4762 generic.go:334] "Generic (PLEG): container finished" podID="71462aef-16e4-4c41-a3ad-11664b64443d" containerID="5b43ccff72037691663e8090cfe6155d5a9105b6d4ccb85efeb73e4315d841b6" exitCode=0 Mar 08 01:18:01 crc kubenswrapper[4762]: I0308 01:18:01.994345 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" event={"ID":"71462aef-16e4-4c41-a3ad-11664b64443d","Type":"ContainerDied","Data":"5b43ccff72037691663e8090cfe6155d5a9105b6d4ccb85efeb73e4315d841b6"} Mar 08 01:18:01 crc kubenswrapper[4762]: I0308 01:18:01.996387 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548878-j8mv8" 
event={"ID":"a0de1641-d236-4654-b681-56222a8d26aa","Type":"ContainerStarted","Data":"89cf39cb86fc69bd09dcaa7e37588b3e405a50dcf767b9f7ad8cdf89844b8ade"} Mar 08 01:18:02 crc kubenswrapper[4762]: I0308 01:18:02.264993 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:18:02 crc kubenswrapper[4762]: E0308 01:18:02.265425 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:18:03 crc kubenswrapper[4762]: I0308 01:18:03.007327 4762 generic.go:334] "Generic (PLEG): container finished" podID="a0de1641-d236-4654-b681-56222a8d26aa" containerID="9ea49bdc202fd68cf5b395dbfe9d8bb5fa8c96b77286cd15c85f1911407fd1ce" exitCode=0 Mar 08 01:18:03 crc kubenswrapper[4762]: I0308 01:18:03.007380 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548878-j8mv8" event={"ID":"a0de1641-d236-4654-b681-56222a8d26aa","Type":"ContainerDied","Data":"9ea49bdc202fd68cf5b395dbfe9d8bb5fa8c96b77286cd15c85f1911407fd1ce"} Mar 08 01:18:03 crc kubenswrapper[4762]: I0308 01:18:03.563409 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" Mar 08 01:18:03 crc kubenswrapper[4762]: I0308 01:18:03.694635 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnxh7\" (UniqueName: \"kubernetes.io/projected/71462aef-16e4-4c41-a3ad-11664b64443d-kube-api-access-cnxh7\") pod \"71462aef-16e4-4c41-a3ad-11664b64443d\" (UID: \"71462aef-16e4-4c41-a3ad-11664b64443d\") " Mar 08 01:18:03 crc kubenswrapper[4762]: I0308 01:18:03.694802 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-ssh-key-openstack-edpm-ipam\") pod \"71462aef-16e4-4c41-a3ad-11664b64443d\" (UID: \"71462aef-16e4-4c41-a3ad-11664b64443d\") " Mar 08 01:18:03 crc kubenswrapper[4762]: I0308 01:18:03.694849 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-ceph\") pod \"71462aef-16e4-4c41-a3ad-11664b64443d\" (UID: \"71462aef-16e4-4c41-a3ad-11664b64443d\") " Mar 08 01:18:03 crc kubenswrapper[4762]: I0308 01:18:03.694882 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-inventory-0\") pod \"71462aef-16e4-4c41-a3ad-11664b64443d\" (UID: \"71462aef-16e4-4c41-a3ad-11664b64443d\") " Mar 08 01:18:03 crc kubenswrapper[4762]: I0308 01:18:03.704013 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71462aef-16e4-4c41-a3ad-11664b64443d-kube-api-access-cnxh7" (OuterVolumeSpecName: "kube-api-access-cnxh7") pod "71462aef-16e4-4c41-a3ad-11664b64443d" (UID: "71462aef-16e4-4c41-a3ad-11664b64443d"). InnerVolumeSpecName "kube-api-access-cnxh7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:18:03 crc kubenswrapper[4762]: I0308 01:18:03.704813 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-ceph" (OuterVolumeSpecName: "ceph") pod "71462aef-16e4-4c41-a3ad-11664b64443d" (UID: "71462aef-16e4-4c41-a3ad-11664b64443d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:18:03 crc kubenswrapper[4762]: I0308 01:18:03.745837 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "71462aef-16e4-4c41-a3ad-11664b64443d" (UID: "71462aef-16e4-4c41-a3ad-11664b64443d"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:18:03 crc kubenswrapper[4762]: I0308 01:18:03.760588 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "71462aef-16e4-4c41-a3ad-11664b64443d" (UID: "71462aef-16e4-4c41-a3ad-11664b64443d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:18:03 crc kubenswrapper[4762]: I0308 01:18:03.796849 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnxh7\" (UniqueName: \"kubernetes.io/projected/71462aef-16e4-4c41-a3ad-11664b64443d-kube-api-access-cnxh7\") on node \"crc\" DevicePath \"\"" Mar 08 01:18:03 crc kubenswrapper[4762]: I0308 01:18:03.796885 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:18:03 crc kubenswrapper[4762]: I0308 01:18:03.796897 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:18:03 crc kubenswrapper[4762]: I0308 01:18:03.796906 4762 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/71462aef-16e4-4c41-a3ad-11664b64443d-inventory-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.025042 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" event={"ID":"71462aef-16e4-4c41-a3ad-11664b64443d","Type":"ContainerDied","Data":"66002c6f461b25f1e1aa3222dab08a4c2d3254879869e3d3429c2f5a3cc728a5"} Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.025142 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66002c6f461b25f1e1aa3222dab08a4c2d3254879869e3d3429c2f5a3cc728a5" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.025088 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-d7dkx" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.107996 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg"] Mar 08 01:18:04 crc kubenswrapper[4762]: E0308 01:18:04.108961 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71462aef-16e4-4c41-a3ad-11664b64443d" containerName="ssh-known-hosts-edpm-deployment" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.108983 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="71462aef-16e4-4c41-a3ad-11664b64443d" containerName="ssh-known-hosts-edpm-deployment" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.109253 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="71462aef-16e4-4c41-a3ad-11664b64443d" containerName="ssh-known-hosts-edpm-deployment" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.110218 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.114520 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.114738 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.114944 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.115185 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.123555 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg"] Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.129278 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.204929 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4szkg\" (UID: \"b2693512-468b-4a97-94f1-27bb9301963a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.204981 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4szkg\" (UID: \"b2693512-468b-4a97-94f1-27bb9301963a\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.205087 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4szkg\" (UID: \"b2693512-468b-4a97-94f1-27bb9301963a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.205575 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8vj6\" (UniqueName: \"kubernetes.io/projected/b2693512-468b-4a97-94f1-27bb9301963a-kube-api-access-g8vj6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4szkg\" (UID: \"b2693512-468b-4a97-94f1-27bb9301963a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.343795 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4szkg\" (UID: \"b2693512-468b-4a97-94f1-27bb9301963a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.344684 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8vj6\" (UniqueName: \"kubernetes.io/projected/b2693512-468b-4a97-94f1-27bb9301963a-kube-api-access-g8vj6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4szkg\" (UID: \"b2693512-468b-4a97-94f1-27bb9301963a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.345175 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4szkg\" (UID: \"b2693512-468b-4a97-94f1-27bb9301963a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.345203 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4szkg\" (UID: \"b2693512-468b-4a97-94f1-27bb9301963a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.349516 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4szkg\" (UID: \"b2693512-468b-4a97-94f1-27bb9301963a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.351595 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4szkg\" (UID: \"b2693512-468b-4a97-94f1-27bb9301963a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.351790 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4szkg\" (UID: \"b2693512-468b-4a97-94f1-27bb9301963a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" Mar 08 01:18:04 crc 
kubenswrapper[4762]: I0308 01:18:04.361128 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8vj6\" (UniqueName: \"kubernetes.io/projected/b2693512-468b-4a97-94f1-27bb9301963a-kube-api-access-g8vj6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-4szkg\" (UID: \"b2693512-468b-4a97-94f1-27bb9301963a\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.450473 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548878-j8mv8" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.451416 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.549434 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7xlv\" (UniqueName: \"kubernetes.io/projected/a0de1641-d236-4654-b681-56222a8d26aa-kube-api-access-d7xlv\") pod \"a0de1641-d236-4654-b681-56222a8d26aa\" (UID: \"a0de1641-d236-4654-b681-56222a8d26aa\") " Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.555284 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0de1641-d236-4654-b681-56222a8d26aa-kube-api-access-d7xlv" (OuterVolumeSpecName: "kube-api-access-d7xlv") pod "a0de1641-d236-4654-b681-56222a8d26aa" (UID: "a0de1641-d236-4654-b681-56222a8d26aa"). InnerVolumeSpecName "kube-api-access-d7xlv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:18:04 crc kubenswrapper[4762]: I0308 01:18:04.653790 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7xlv\" (UniqueName: \"kubernetes.io/projected/a0de1641-d236-4654-b681-56222a8d26aa-kube-api-access-d7xlv\") on node \"crc\" DevicePath \"\"" Mar 08 01:18:05 crc kubenswrapper[4762]: I0308 01:18:05.005745 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg"] Mar 08 01:18:05 crc kubenswrapper[4762]: I0308 01:18:05.045661 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548878-j8mv8" event={"ID":"a0de1641-d236-4654-b681-56222a8d26aa","Type":"ContainerDied","Data":"89cf39cb86fc69bd09dcaa7e37588b3e405a50dcf767b9f7ad8cdf89844b8ade"} Mar 08 01:18:05 crc kubenswrapper[4762]: I0308 01:18:05.045711 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548878-j8mv8" Mar 08 01:18:05 crc kubenswrapper[4762]: I0308 01:18:05.045737 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89cf39cb86fc69bd09dcaa7e37588b3e405a50dcf767b9f7ad8cdf89844b8ade" Mar 08 01:18:05 crc kubenswrapper[4762]: I0308 01:18:05.551066 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548872-fbs97"] Mar 08 01:18:05 crc kubenswrapper[4762]: I0308 01:18:05.561989 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548872-fbs97"] Mar 08 01:18:06 crc kubenswrapper[4762]: I0308 01:18:06.057356 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" event={"ID":"b2693512-468b-4a97-94f1-27bb9301963a","Type":"ContainerStarted","Data":"1afb857144232c82f2a8dd2b53ff6b3b5107718c2f5c7e586d6f8efdedad6d4b"} Mar 08 01:18:06 crc kubenswrapper[4762]: I0308 
01:18:06.057858 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" event={"ID":"b2693512-468b-4a97-94f1-27bb9301963a","Type":"ContainerStarted","Data":"5d1dc5633f88b8dc8a904f72895ef86a9c412b2f9a5696242128f0a9a1419f72"} Mar 08 01:18:06 crc kubenswrapper[4762]: I0308 01:18:06.080460 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" podStartSLOduration=1.650109895 podStartE2EDuration="2.080432993s" podCreationTimestamp="2026-03-08 01:18:04 +0000 UTC" firstStartedPulling="2026-03-08 01:18:05.033034288 +0000 UTC m=+3306.507178672" lastFinishedPulling="2026-03-08 01:18:05.463357426 +0000 UTC m=+3306.937501770" observedRunningTime="2026-03-08 01:18:06.072227679 +0000 UTC m=+3307.546372033" watchObservedRunningTime="2026-03-08 01:18:06.080432993 +0000 UTC m=+3307.554577377" Mar 08 01:18:07 crc kubenswrapper[4762]: I0308 01:18:07.285454 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b931a840-b0e8-4a49-8ba9-0c658c6fa13e" path="/var/lib/kubelet/pods/b931a840-b0e8-4a49-8ba9-0c658c6fa13e/volumes" Mar 08 01:18:13 crc kubenswrapper[4762]: I0308 01:18:13.267572 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:18:13 crc kubenswrapper[4762]: E0308 01:18:13.268644 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:18:15 crc kubenswrapper[4762]: I0308 01:18:15.178934 4762 generic.go:334] "Generic (PLEG): container finished" 
podID="b2693512-468b-4a97-94f1-27bb9301963a" containerID="1afb857144232c82f2a8dd2b53ff6b3b5107718c2f5c7e586d6f8efdedad6d4b" exitCode=0 Mar 08 01:18:15 crc kubenswrapper[4762]: I0308 01:18:15.178999 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" event={"ID":"b2693512-468b-4a97-94f1-27bb9301963a","Type":"ContainerDied","Data":"1afb857144232c82f2a8dd2b53ff6b3b5107718c2f5c7e586d6f8efdedad6d4b"} Mar 08 01:18:16 crc kubenswrapper[4762]: I0308 01:18:16.850170 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" Mar 08 01:18:16 crc kubenswrapper[4762]: I0308 01:18:16.967301 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8vj6\" (UniqueName: \"kubernetes.io/projected/b2693512-468b-4a97-94f1-27bb9301963a-kube-api-access-g8vj6\") pod \"b2693512-468b-4a97-94f1-27bb9301963a\" (UID: \"b2693512-468b-4a97-94f1-27bb9301963a\") " Mar 08 01:18:16 crc kubenswrapper[4762]: I0308 01:18:16.967373 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-ssh-key-openstack-edpm-ipam\") pod \"b2693512-468b-4a97-94f1-27bb9301963a\" (UID: \"b2693512-468b-4a97-94f1-27bb9301963a\") " Mar 08 01:18:16 crc kubenswrapper[4762]: I0308 01:18:16.967603 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-ceph\") pod \"b2693512-468b-4a97-94f1-27bb9301963a\" (UID: \"b2693512-468b-4a97-94f1-27bb9301963a\") " Mar 08 01:18:16 crc kubenswrapper[4762]: I0308 01:18:16.967635 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-inventory\") 
pod \"b2693512-468b-4a97-94f1-27bb9301963a\" (UID: \"b2693512-468b-4a97-94f1-27bb9301963a\") " Mar 08 01:18:16 crc kubenswrapper[4762]: I0308 01:18:16.974203 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-ceph" (OuterVolumeSpecName: "ceph") pod "b2693512-468b-4a97-94f1-27bb9301963a" (UID: "b2693512-468b-4a97-94f1-27bb9301963a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:18:16 crc kubenswrapper[4762]: I0308 01:18:16.976680 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2693512-468b-4a97-94f1-27bb9301963a-kube-api-access-g8vj6" (OuterVolumeSpecName: "kube-api-access-g8vj6") pod "b2693512-468b-4a97-94f1-27bb9301963a" (UID: "b2693512-468b-4a97-94f1-27bb9301963a"). InnerVolumeSpecName "kube-api-access-g8vj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.009079 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b2693512-468b-4a97-94f1-27bb9301963a" (UID: "b2693512-468b-4a97-94f1-27bb9301963a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.022457 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-inventory" (OuterVolumeSpecName: "inventory") pod "b2693512-468b-4a97-94f1-27bb9301963a" (UID: "b2693512-468b-4a97-94f1-27bb9301963a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.071330 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8vj6\" (UniqueName: \"kubernetes.io/projected/b2693512-468b-4a97-94f1-27bb9301963a-kube-api-access-g8vj6\") on node \"crc\" DevicePath \"\"" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.071389 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.071414 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.071432 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2693512-468b-4a97-94f1-27bb9301963a-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.216014 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" event={"ID":"b2693512-468b-4a97-94f1-27bb9301963a","Type":"ContainerDied","Data":"5d1dc5633f88b8dc8a904f72895ef86a9c412b2f9a5696242128f0a9a1419f72"} Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.216078 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d1dc5633f88b8dc8a904f72895ef86a9c412b2f9a5696242128f0a9a1419f72" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.216159 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-4szkg" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.331308 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j"] Mar 08 01:18:17 crc kubenswrapper[4762]: E0308 01:18:17.332206 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2693512-468b-4a97-94f1-27bb9301963a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.332231 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2693512-468b-4a97-94f1-27bb9301963a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 08 01:18:17 crc kubenswrapper[4762]: E0308 01:18:17.332269 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0de1641-d236-4654-b681-56222a8d26aa" containerName="oc" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.332278 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0de1641-d236-4654-b681-56222a8d26aa" containerName="oc" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.332500 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0de1641-d236-4654-b681-56222a8d26aa" containerName="oc" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.332550 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2693512-468b-4a97-94f1-27bb9301963a" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.333424 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.341649 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.342405 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.342473 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.342549 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.342651 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.366107 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j"] Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.376955 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j\" (UID: \"11eeb278-f92e-410f-93b6-1797527d31ad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.377029 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2km6\" (UniqueName: \"kubernetes.io/projected/11eeb278-f92e-410f-93b6-1797527d31ad-kube-api-access-c2km6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j\" (UID: \"11eeb278-f92e-410f-93b6-1797527d31ad\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.377057 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j\" (UID: \"11eeb278-f92e-410f-93b6-1797527d31ad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.377139 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j\" (UID: \"11eeb278-f92e-410f-93b6-1797527d31ad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.479690 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j\" (UID: \"11eeb278-f92e-410f-93b6-1797527d31ad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.479866 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2km6\" (UniqueName: \"kubernetes.io/projected/11eeb278-f92e-410f-93b6-1797527d31ad-kube-api-access-c2km6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j\" (UID: \"11eeb278-f92e-410f-93b6-1797527d31ad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.479926 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j\" (UID: \"11eeb278-f92e-410f-93b6-1797527d31ad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.480076 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j\" (UID: \"11eeb278-f92e-410f-93b6-1797527d31ad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.485031 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j\" (UID: \"11eeb278-f92e-410f-93b6-1797527d31ad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.485580 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j\" (UID: \"11eeb278-f92e-410f-93b6-1797527d31ad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.498422 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j\" (UID: \"11eeb278-f92e-410f-93b6-1797527d31ad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" Mar 08 
01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.501335 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2km6\" (UniqueName: \"kubernetes.io/projected/11eeb278-f92e-410f-93b6-1797527d31ad-kube-api-access-c2km6\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j\" (UID: \"11eeb278-f92e-410f-93b6-1797527d31ad\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" Mar 08 01:18:17 crc kubenswrapper[4762]: I0308 01:18:17.664048 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" Mar 08 01:18:18 crc kubenswrapper[4762]: I0308 01:18:18.343294 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j"] Mar 08 01:18:19 crc kubenswrapper[4762]: I0308 01:18:19.250478 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" event={"ID":"11eeb278-f92e-410f-93b6-1797527d31ad","Type":"ContainerStarted","Data":"d84862a0875d0e9428065536b1849f9ba96ec28bf882d6b1d9f5d07f1e0bb528"} Mar 08 01:18:19 crc kubenswrapper[4762]: I0308 01:18:19.251218 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" event={"ID":"11eeb278-f92e-410f-93b6-1797527d31ad","Type":"ContainerStarted","Data":"ffae155f7ed4d73e859b6d96acdfda895eb55421a97989771bc9ad37652eb22b"} Mar 08 01:18:19 crc kubenswrapper[4762]: I0308 01:18:19.307426 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" podStartSLOduration=1.827043196 podStartE2EDuration="2.307389242s" podCreationTimestamp="2026-03-08 01:18:17 +0000 UTC" firstStartedPulling="2026-03-08 01:18:18.344009766 +0000 UTC m=+3319.818154110" lastFinishedPulling="2026-03-08 01:18:18.824355782 +0000 UTC m=+3320.298500156" 
observedRunningTime="2026-03-08 01:18:19.27536291 +0000 UTC m=+3320.749507264" watchObservedRunningTime="2026-03-08 01:18:19.307389242 +0000 UTC m=+3320.781533616" Mar 08 01:18:27 crc kubenswrapper[4762]: I0308 01:18:27.266131 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:18:27 crc kubenswrapper[4762]: E0308 01:18:27.267212 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:18:31 crc kubenswrapper[4762]: I0308 01:18:31.395269 4762 generic.go:334] "Generic (PLEG): container finished" podID="11eeb278-f92e-410f-93b6-1797527d31ad" containerID="d84862a0875d0e9428065536b1849f9ba96ec28bf882d6b1d9f5d07f1e0bb528" exitCode=0 Mar 08 01:18:31 crc kubenswrapper[4762]: I0308 01:18:31.395358 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" event={"ID":"11eeb278-f92e-410f-93b6-1797527d31ad","Type":"ContainerDied","Data":"d84862a0875d0e9428065536b1849f9ba96ec28bf882d6b1d9f5d07f1e0bb528"} Mar 08 01:18:31 crc kubenswrapper[4762]: I0308 01:18:31.427412 4762 scope.go:117] "RemoveContainer" containerID="6649ed867da85a6a3139a03678e151fa00641067212d82c1b47624469309211b" Mar 08 01:18:32 crc kubenswrapper[4762]: I0308 01:18:32.986041 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.118235 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-ceph\") pod \"11eeb278-f92e-410f-93b6-1797527d31ad\" (UID: \"11eeb278-f92e-410f-93b6-1797527d31ad\") " Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.118373 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2km6\" (UniqueName: \"kubernetes.io/projected/11eeb278-f92e-410f-93b6-1797527d31ad-kube-api-access-c2km6\") pod \"11eeb278-f92e-410f-93b6-1797527d31ad\" (UID: \"11eeb278-f92e-410f-93b6-1797527d31ad\") " Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.118514 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-ssh-key-openstack-edpm-ipam\") pod \"11eeb278-f92e-410f-93b6-1797527d31ad\" (UID: \"11eeb278-f92e-410f-93b6-1797527d31ad\") " Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.118565 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-inventory\") pod \"11eeb278-f92e-410f-93b6-1797527d31ad\" (UID: \"11eeb278-f92e-410f-93b6-1797527d31ad\") " Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.124361 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11eeb278-f92e-410f-93b6-1797527d31ad-kube-api-access-c2km6" (OuterVolumeSpecName: "kube-api-access-c2km6") pod "11eeb278-f92e-410f-93b6-1797527d31ad" (UID: "11eeb278-f92e-410f-93b6-1797527d31ad"). InnerVolumeSpecName "kube-api-access-c2km6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.128894 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-ceph" (OuterVolumeSpecName: "ceph") pod "11eeb278-f92e-410f-93b6-1797527d31ad" (UID: "11eeb278-f92e-410f-93b6-1797527d31ad"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.150457 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-inventory" (OuterVolumeSpecName: "inventory") pod "11eeb278-f92e-410f-93b6-1797527d31ad" (UID: "11eeb278-f92e-410f-93b6-1797527d31ad"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.151530 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "11eeb278-f92e-410f-93b6-1797527d31ad" (UID: "11eeb278-f92e-410f-93b6-1797527d31ad"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.220652 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.220692 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2km6\" (UniqueName: \"kubernetes.io/projected/11eeb278-f92e-410f-93b6-1797527d31ad-kube-api-access-c2km6\") on node \"crc\" DevicePath \"\"" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.220708 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.220719 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/11eeb278-f92e-410f-93b6-1797527d31ad-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.428650 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" event={"ID":"11eeb278-f92e-410f-93b6-1797527d31ad","Type":"ContainerDied","Data":"ffae155f7ed4d73e859b6d96acdfda895eb55421a97989771bc9ad37652eb22b"} Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.428940 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffae155f7ed4d73e859b6d96acdfda895eb55421a97989771bc9ad37652eb22b" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.428738 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.537733 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h"] Mar 08 01:18:33 crc kubenswrapper[4762]: E0308 01:18:33.538889 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11eeb278-f92e-410f-93b6-1797527d31ad" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.538916 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="11eeb278-f92e-410f-93b6-1797527d31ad" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.539191 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="11eeb278-f92e-410f-93b6-1797527d31ad" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.540059 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.542366 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.542670 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.542821 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.543031 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.543800 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.544167 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.544171 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.544213 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.544219 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.550067 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 
01:18:33.550439 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h"] Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.629604 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.629710 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.629790 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.629838 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: 
\"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.629874 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.630042 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.630108 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.630149 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.630216 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.630431 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.630566 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.630607 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-libvirt-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.630636 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.630663 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qmkf\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-kube-api-access-8qmkf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.630702 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.630752 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.630797 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.732530 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.732611 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.732645 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.732706 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.732799 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.732833 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.732867 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.732905 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.732950 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.732978 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.733003 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.733038 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.733088 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.733126 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.733156 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: 
\"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.733182 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.733212 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qmkf\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-kube-api-access-8qmkf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.741984 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.744217 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 
01:18:33.744258 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.744543 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.746585 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.746602 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.747216 4762 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.747390 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.747385 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.747442 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.747623 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.748290 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.749714 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.750061 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qmkf\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-kube-api-access-8qmkf\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.750310 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.755128 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.765489 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-gt49h\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:33 crc kubenswrapper[4762]: I0308 01:18:33.873850 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:18:34 crc kubenswrapper[4762]: I0308 01:18:34.489378 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h"] Mar 08 01:18:35 crc kubenswrapper[4762]: I0308 01:18:35.451566 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" event={"ID":"25c01bbb-608e-408d-acd4-636bf28176e3","Type":"ContainerStarted","Data":"644eabc967de7a72669714c636e9be6f7ba1ea65c134f61d067f95192e9ef6e9"} Mar 08 01:18:35 crc kubenswrapper[4762]: I0308 01:18:35.452262 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" event={"ID":"25c01bbb-608e-408d-acd4-636bf28176e3","Type":"ContainerStarted","Data":"a8f66831390cdbabd750e09b01df33dc64dac9d66dd0c40469304a9d5038eab7"} Mar 08 01:18:35 crc kubenswrapper[4762]: I0308 01:18:35.488146 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" podStartSLOduration=2.051590707 podStartE2EDuration="2.488121977s" podCreationTimestamp="2026-03-08 01:18:33 +0000 UTC" firstStartedPulling="2026-03-08 01:18:34.472689671 +0000 UTC m=+3335.946834025" lastFinishedPulling="2026-03-08 01:18:34.909220951 +0000 UTC m=+3336.383365295" observedRunningTime="2026-03-08 01:18:35.483855385 +0000 UTC m=+3336.957999769" watchObservedRunningTime="2026-03-08 01:18:35.488121977 +0000 UTC m=+3336.962266351" Mar 08 01:18:40 crc kubenswrapper[4762]: I0308 01:18:40.263821 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:18:40 crc kubenswrapper[4762]: E0308 01:18:40.265549 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:18:51 crc kubenswrapper[4762]: I0308 01:18:51.264139 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:18:51 crc kubenswrapper[4762]: E0308 01:18:51.265015 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:19:03 crc kubenswrapper[4762]: I0308 01:19:03.264123 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:19:03 crc kubenswrapper[4762]: E0308 01:19:03.265521 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:19:15 crc kubenswrapper[4762]: I0308 01:19:15.264789 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:19:15 crc kubenswrapper[4762]: E0308 01:19:15.265878 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:19:28 crc kubenswrapper[4762]: I0308 01:19:28.263006 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:19:28 crc kubenswrapper[4762]: E0308 01:19:28.263895 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:19:34 crc kubenswrapper[4762]: I0308 01:19:34.213149 4762 generic.go:334] "Generic (PLEG): container finished" podID="25c01bbb-608e-408d-acd4-636bf28176e3" containerID="644eabc967de7a72669714c636e9be6f7ba1ea65c134f61d067f95192e9ef6e9" exitCode=0 Mar 08 01:19:34 crc kubenswrapper[4762]: I0308 01:19:34.213231 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" event={"ID":"25c01bbb-608e-408d-acd4-636bf28176e3","Type":"ContainerDied","Data":"644eabc967de7a72669714c636e9be6f7ba1ea65c134f61d067f95192e9ef6e9"} Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.837900 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.982925 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qmkf\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-kube-api-access-8qmkf\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.982984 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.983024 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-telemetry-combined-ca-bundle\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.983044 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ceph\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.983122 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-bootstrap-combined-ca-bundle\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 08 01:19:35 crc 
kubenswrapper[4762]: I0308 01:19:35.983147 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-ovn-default-certs-0\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.983165 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.983183 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-neutron-metadata-combined-ca-bundle\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.983203 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ssh-key-openstack-edpm-ipam\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.983244 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 
08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.983286 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-telemetry-power-monitoring-combined-ca-bundle\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.983326 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-nova-combined-ca-bundle\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.983433 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-inventory\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.983466 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.983494 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ovn-combined-ca-bundle\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.983543 
4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-libvirt-combined-ca-bundle\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.983559 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-repo-setup-combined-ca-bundle\") pod \"25c01bbb-608e-408d-acd4-636bf28176e3\" (UID: \"25c01bbb-608e-408d-acd4-636bf28176e3\") " Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.992276 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.992330 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.992395 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.992394 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.992509 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.992625 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.993363 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.994946 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-kube-api-access-8qmkf" (OuterVolumeSpecName: "kube-api-access-8qmkf") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). InnerVolumeSpecName "kube-api-access-8qmkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.994999 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.995913 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). 
InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.996060 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.996866 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.996874 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:19:35 crc kubenswrapper[4762]: I0308 01:19:35.997583 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.001547 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ceph" (OuterVolumeSpecName: "ceph") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.021581 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-inventory" (OuterVolumeSpecName: "inventory") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.033250 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "25c01bbb-608e-408d-acd4-636bf28176e3" (UID: "25c01bbb-608e-408d-acd4-636bf28176e3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.085863 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.085894 4762 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.085908 4762 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.085920 4762 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.085935 4762 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.085949 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qmkf\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-kube-api-access-8qmkf\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.085960 4762 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" 
(UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.085973 4762 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.085982 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.085991 4762 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.086001 4762 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.086012 4762 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.086023 4762 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 
01:19:36.086033 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.086046 4762 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/25c01bbb-608e-408d-acd4-636bf28176e3-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.086058 4762 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.086069 4762 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25c01bbb-608e-408d-acd4-636bf28176e3-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.246936 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" event={"ID":"25c01bbb-608e-408d-acd4-636bf28176e3","Type":"ContainerDied","Data":"a8f66831390cdbabd750e09b01df33dc64dac9d66dd0c40469304a9d5038eab7"} Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.247030 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8f66831390cdbabd750e09b01df33dc64dac9d66dd0c40469304a9d5038eab7" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.247173 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-gt49h" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.426988 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k"] Mar 08 01:19:36 crc kubenswrapper[4762]: E0308 01:19:36.427731 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25c01bbb-608e-408d-acd4-636bf28176e3" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.427868 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c01bbb-608e-408d-acd4-636bf28176e3" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.428262 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="25c01bbb-608e-408d-acd4-636bf28176e3" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.429290 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.440025 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.440564 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.440804 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.441601 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.441914 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.452974 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k"] Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.495736 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k\" (UID: \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.495871 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8g56\" (UniqueName: \"kubernetes.io/projected/cfed1788-c505-44a2-93ee-cdb89f86f1a7-kube-api-access-k8g56\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k\" (UID: 
\"cfed1788-c505-44a2-93ee-cdb89f86f1a7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.495936 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k\" (UID: \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.496020 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k\" (UID: \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.598230 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k\" (UID: \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.598292 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8g56\" (UniqueName: \"kubernetes.io/projected/cfed1788-c505-44a2-93ee-cdb89f86f1a7-kube-api-access-k8g56\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k\" (UID: \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.598331 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k\" (UID: \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.598383 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k\" (UID: \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.605940 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k\" (UID: \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.606625 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k\" (UID: \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.627060 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k\" (UID: 
\"cfed1788-c505-44a2-93ee-cdb89f86f1a7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.636286 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8g56\" (UniqueName: \"kubernetes.io/projected/cfed1788-c505-44a2-93ee-cdb89f86f1a7-kube-api-access-k8g56\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k\" (UID: \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" Mar 08 01:19:36 crc kubenswrapper[4762]: I0308 01:19:36.753849 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" Mar 08 01:19:37 crc kubenswrapper[4762]: I0308 01:19:37.391001 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k"] Mar 08 01:19:38 crc kubenswrapper[4762]: I0308 01:19:38.273084 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" event={"ID":"cfed1788-c505-44a2-93ee-cdb89f86f1a7","Type":"ContainerStarted","Data":"2cacb51e77afd71934bcf1feeac8947947708b86a68045f35b83964efadb3b2d"} Mar 08 01:19:38 crc kubenswrapper[4762]: I0308 01:19:38.273447 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" event={"ID":"cfed1788-c505-44a2-93ee-cdb89f86f1a7","Type":"ContainerStarted","Data":"2f42f757ffccfa5c9baaaa138b69e27aa842a82083f4534773033eee5d451df6"} Mar 08 01:19:38 crc kubenswrapper[4762]: I0308 01:19:38.300968 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" podStartSLOduration=1.793472255 podStartE2EDuration="2.30094988s" podCreationTimestamp="2026-03-08 01:19:36 +0000 UTC" firstStartedPulling="2026-03-08 
01:19:37.393223427 +0000 UTC m=+3398.867367801" lastFinishedPulling="2026-03-08 01:19:37.900701072 +0000 UTC m=+3399.374845426" observedRunningTime="2026-03-08 01:19:38.291930561 +0000 UTC m=+3399.766074945" watchObservedRunningTime="2026-03-08 01:19:38.30094988 +0000 UTC m=+3399.775094214" Mar 08 01:19:39 crc kubenswrapper[4762]: I0308 01:19:39.283473 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:19:39 crc kubenswrapper[4762]: E0308 01:19:39.286636 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:19:45 crc kubenswrapper[4762]: I0308 01:19:45.359316 4762 generic.go:334] "Generic (PLEG): container finished" podID="cfed1788-c505-44a2-93ee-cdb89f86f1a7" containerID="2cacb51e77afd71934bcf1feeac8947947708b86a68045f35b83964efadb3b2d" exitCode=0 Mar 08 01:19:45 crc kubenswrapper[4762]: I0308 01:19:45.359419 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" event={"ID":"cfed1788-c505-44a2-93ee-cdb89f86f1a7","Type":"ContainerDied","Data":"2cacb51e77afd71934bcf1feeac8947947708b86a68045f35b83964efadb3b2d"} Mar 08 01:19:46 crc kubenswrapper[4762]: I0308 01:19:46.895523 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.082808 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-ssh-key-openstack-edpm-ipam\") pod \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\" (UID: \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\") " Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.082885 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-inventory\") pod \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\" (UID: \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\") " Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.082958 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-ceph\") pod \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\" (UID: \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\") " Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.083237 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8g56\" (UniqueName: \"kubernetes.io/projected/cfed1788-c505-44a2-93ee-cdb89f86f1a7-kube-api-access-k8g56\") pod \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\" (UID: \"cfed1788-c505-44a2-93ee-cdb89f86f1a7\") " Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.089191 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-ceph" (OuterVolumeSpecName: "ceph") pod "cfed1788-c505-44a2-93ee-cdb89f86f1a7" (UID: "cfed1788-c505-44a2-93ee-cdb89f86f1a7"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.091409 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfed1788-c505-44a2-93ee-cdb89f86f1a7-kube-api-access-k8g56" (OuterVolumeSpecName: "kube-api-access-k8g56") pod "cfed1788-c505-44a2-93ee-cdb89f86f1a7" (UID: "cfed1788-c505-44a2-93ee-cdb89f86f1a7"). InnerVolumeSpecName "kube-api-access-k8g56". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.113376 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cfed1788-c505-44a2-93ee-cdb89f86f1a7" (UID: "cfed1788-c505-44a2-93ee-cdb89f86f1a7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.115805 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-inventory" (OuterVolumeSpecName: "inventory") pod "cfed1788-c505-44a2-93ee-cdb89f86f1a7" (UID: "cfed1788-c505-44a2-93ee-cdb89f86f1a7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.186102 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.186134 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.186144 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cfed1788-c505-44a2-93ee-cdb89f86f1a7-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.186152 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8g56\" (UniqueName: \"kubernetes.io/projected/cfed1788-c505-44a2-93ee-cdb89f86f1a7-kube-api-access-k8g56\") on node \"crc\" DevicePath \"\"" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.384138 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" event={"ID":"cfed1788-c505-44a2-93ee-cdb89f86f1a7","Type":"ContainerDied","Data":"2f42f757ffccfa5c9baaaa138b69e27aa842a82083f4534773033eee5d451df6"} Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.384204 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f42f757ffccfa5c9baaaa138b69e27aa842a82083f4534773033eee5d451df6" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.384315 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.665045 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527"] Mar 08 01:19:47 crc kubenswrapper[4762]: E0308 01:19:47.665884 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfed1788-c505-44a2-93ee-cdb89f86f1a7" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.665911 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfed1788-c505-44a2-93ee-cdb89f86f1a7" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.666317 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfed1788-c505-44a2-93ee-cdb89f86f1a7" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.667541 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.670221 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.670457 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.670458 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.670508 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.671226 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.672384 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.682957 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527"] Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.800452 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.800542 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.800570 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.800614 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87kb6\" (UniqueName: \"kubernetes.io/projected/9644de97-590e-4e5d-b951-241947044e95-kube-api-access-87kb6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.800632 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.800653 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9644de97-590e-4e5d-b951-241947044e95-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.901963 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.902257 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.902293 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.902356 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87kb6\" (UniqueName: \"kubernetes.io/projected/9644de97-590e-4e5d-b951-241947044e95-kube-api-access-87kb6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.902381 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.902413 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9644de97-590e-4e5d-b951-241947044e95-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.903415 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9644de97-590e-4e5d-b951-241947044e95-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.908223 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.908456 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: 
I0308 01:19:47.910375 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.910941 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:47 crc kubenswrapper[4762]: I0308 01:19:47.932947 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87kb6\" (UniqueName: \"kubernetes.io/projected/9644de97-590e-4e5d-b951-241947044e95-kube-api-access-87kb6\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-58527\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:48 crc kubenswrapper[4762]: I0308 01:19:48.035845 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:19:48 crc kubenswrapper[4762]: I0308 01:19:48.659194 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527"] Mar 08 01:19:49 crc kubenswrapper[4762]: I0308 01:19:49.407481 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" event={"ID":"9644de97-590e-4e5d-b951-241947044e95","Type":"ContainerStarted","Data":"7d7f6ce95c4f10b5f1c8c63c133d3f1c1f7a03d6d4e3c3f90bfe3e625756659c"} Mar 08 01:19:50 crc kubenswrapper[4762]: I0308 01:19:50.424863 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" event={"ID":"9644de97-590e-4e5d-b951-241947044e95","Type":"ContainerStarted","Data":"0545899f3046e663104641436ef55c88477d6f863f01cce42f0818b0fc8d342a"} Mar 08 01:19:50 crc kubenswrapper[4762]: I0308 01:19:50.463551 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" podStartSLOduration=2.992604214 podStartE2EDuration="3.463521658s" podCreationTimestamp="2026-03-08 01:19:47 +0000 UTC" firstStartedPulling="2026-03-08 01:19:48.660956471 +0000 UTC m=+3410.135100825" lastFinishedPulling="2026-03-08 01:19:49.131873925 +0000 UTC m=+3410.606018269" observedRunningTime="2026-03-08 01:19:50.450457434 +0000 UTC m=+3411.924601818" watchObservedRunningTime="2026-03-08 01:19:50.463521658 +0000 UTC m=+3411.937666032" Mar 08 01:19:51 crc kubenswrapper[4762]: I0308 01:19:51.263884 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:19:51 crc kubenswrapper[4762]: E0308 01:19:51.264466 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:20:00 crc kubenswrapper[4762]: I0308 01:20:00.167235 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548880-zg8tk"] Mar 08 01:20:00 crc kubenswrapper[4762]: I0308 01:20:00.169376 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548880-zg8tk" Mar 08 01:20:00 crc kubenswrapper[4762]: I0308 01:20:00.173792 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:20:00 crc kubenswrapper[4762]: I0308 01:20:00.175502 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:20:00 crc kubenswrapper[4762]: I0308 01:20:00.176305 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:20:00 crc kubenswrapper[4762]: I0308 01:20:00.179033 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548880-zg8tk"] Mar 08 01:20:00 crc kubenswrapper[4762]: I0308 01:20:00.323441 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjvld\" (UniqueName: \"kubernetes.io/projected/206cd4a3-f2b1-4097-bc50-14a4c2042c01-kube-api-access-sjvld\") pod \"auto-csr-approver-29548880-zg8tk\" (UID: \"206cd4a3-f2b1-4097-bc50-14a4c2042c01\") " pod="openshift-infra/auto-csr-approver-29548880-zg8tk" Mar 08 01:20:00 crc kubenswrapper[4762]: I0308 01:20:00.426018 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjvld\" (UniqueName: 
\"kubernetes.io/projected/206cd4a3-f2b1-4097-bc50-14a4c2042c01-kube-api-access-sjvld\") pod \"auto-csr-approver-29548880-zg8tk\" (UID: \"206cd4a3-f2b1-4097-bc50-14a4c2042c01\") " pod="openshift-infra/auto-csr-approver-29548880-zg8tk" Mar 08 01:20:00 crc kubenswrapper[4762]: I0308 01:20:00.448453 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjvld\" (UniqueName: \"kubernetes.io/projected/206cd4a3-f2b1-4097-bc50-14a4c2042c01-kube-api-access-sjvld\") pod \"auto-csr-approver-29548880-zg8tk\" (UID: \"206cd4a3-f2b1-4097-bc50-14a4c2042c01\") " pod="openshift-infra/auto-csr-approver-29548880-zg8tk" Mar 08 01:20:00 crc kubenswrapper[4762]: I0308 01:20:00.509789 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548880-zg8tk" Mar 08 01:20:01 crc kubenswrapper[4762]: I0308 01:20:01.071625 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548880-zg8tk"] Mar 08 01:20:01 crc kubenswrapper[4762]: W0308 01:20:01.080664 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod206cd4a3_f2b1_4097_bc50_14a4c2042c01.slice/crio-d7991eac25389ee294730d3cb514025b03b50cb3c7182b606d07ea5a7162d902 WatchSource:0}: Error finding container d7991eac25389ee294730d3cb514025b03b50cb3c7182b606d07ea5a7162d902: Status 404 returned error can't find the container with id d7991eac25389ee294730d3cb514025b03b50cb3c7182b606d07ea5a7162d902 Mar 08 01:20:01 crc kubenswrapper[4762]: I0308 01:20:01.573317 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548880-zg8tk" event={"ID":"206cd4a3-f2b1-4097-bc50-14a4c2042c01","Type":"ContainerStarted","Data":"d7991eac25389ee294730d3cb514025b03b50cb3c7182b606d07ea5a7162d902"} Mar 08 01:20:02 crc kubenswrapper[4762]: I0308 01:20:02.263988 4762 scope.go:117] "RemoveContainer" 
containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:20:02 crc kubenswrapper[4762]: E0308 01:20:02.264700 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:20:03 crc kubenswrapper[4762]: I0308 01:20:03.594493 4762 generic.go:334] "Generic (PLEG): container finished" podID="206cd4a3-f2b1-4097-bc50-14a4c2042c01" containerID="2fbd0346c0abe89246080d3b86033c92a913f9afa97b81e987b1e55053bf3d13" exitCode=0 Mar 08 01:20:03 crc kubenswrapper[4762]: I0308 01:20:03.594569 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548880-zg8tk" event={"ID":"206cd4a3-f2b1-4097-bc50-14a4c2042c01","Type":"ContainerDied","Data":"2fbd0346c0abe89246080d3b86033c92a913f9afa97b81e987b1e55053bf3d13"} Mar 08 01:20:05 crc kubenswrapper[4762]: I0308 01:20:05.035202 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548880-zg8tk" Mar 08 01:20:05 crc kubenswrapper[4762]: I0308 01:20:05.172853 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjvld\" (UniqueName: \"kubernetes.io/projected/206cd4a3-f2b1-4097-bc50-14a4c2042c01-kube-api-access-sjvld\") pod \"206cd4a3-f2b1-4097-bc50-14a4c2042c01\" (UID: \"206cd4a3-f2b1-4097-bc50-14a4c2042c01\") " Mar 08 01:20:05 crc kubenswrapper[4762]: I0308 01:20:05.185142 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/206cd4a3-f2b1-4097-bc50-14a4c2042c01-kube-api-access-sjvld" (OuterVolumeSpecName: "kube-api-access-sjvld") pod "206cd4a3-f2b1-4097-bc50-14a4c2042c01" (UID: "206cd4a3-f2b1-4097-bc50-14a4c2042c01"). InnerVolumeSpecName "kube-api-access-sjvld". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:20:05 crc kubenswrapper[4762]: I0308 01:20:05.275513 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjvld\" (UniqueName: \"kubernetes.io/projected/206cd4a3-f2b1-4097-bc50-14a4c2042c01-kube-api-access-sjvld\") on node \"crc\" DevicePath \"\"" Mar 08 01:20:05 crc kubenswrapper[4762]: I0308 01:20:05.620881 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548880-zg8tk" event={"ID":"206cd4a3-f2b1-4097-bc50-14a4c2042c01","Type":"ContainerDied","Data":"d7991eac25389ee294730d3cb514025b03b50cb3c7182b606d07ea5a7162d902"} Mar 08 01:20:05 crc kubenswrapper[4762]: I0308 01:20:05.621133 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7991eac25389ee294730d3cb514025b03b50cb3c7182b606d07ea5a7162d902" Mar 08 01:20:05 crc kubenswrapper[4762]: I0308 01:20:05.621184 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548880-zg8tk" Mar 08 01:20:06 crc kubenswrapper[4762]: I0308 01:20:06.120593 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548874-5bw9s"] Mar 08 01:20:06 crc kubenswrapper[4762]: I0308 01:20:06.131421 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548874-5bw9s"] Mar 08 01:20:07 crc kubenswrapper[4762]: I0308 01:20:07.281171 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36d4d8ff-7a0f-4055-8647-47cbac3d4d6a" path="/var/lib/kubelet/pods/36d4d8ff-7a0f-4055-8647-47cbac3d4d6a/volumes" Mar 08 01:20:17 crc kubenswrapper[4762]: I0308 01:20:17.264350 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:20:17 crc kubenswrapper[4762]: E0308 01:20:17.272069 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:20:30 crc kubenswrapper[4762]: I0308 01:20:30.264385 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:20:30 crc kubenswrapper[4762]: E0308 01:20:30.265580 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" 
podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:20:31 crc kubenswrapper[4762]: I0308 01:20:31.565938 4762 scope.go:117] "RemoveContainer" containerID="5fa92b998a5ea28268e54d25f38f6b6bb737cd74179018b7e6339df3b2674863" Mar 08 01:20:41 crc kubenswrapper[4762]: I0308 01:20:41.271931 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:20:41 crc kubenswrapper[4762]: E0308 01:20:41.276693 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:20:42 crc kubenswrapper[4762]: I0308 01:20:42.410580 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-86v7v"] Mar 08 01:20:42 crc kubenswrapper[4762]: E0308 01:20:42.411340 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="206cd4a3-f2b1-4097-bc50-14a4c2042c01" containerName="oc" Mar 08 01:20:42 crc kubenswrapper[4762]: I0308 01:20:42.411364 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="206cd4a3-f2b1-4097-bc50-14a4c2042c01" containerName="oc" Mar 08 01:20:42 crc kubenswrapper[4762]: I0308 01:20:42.411859 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="206cd4a3-f2b1-4097-bc50-14a4c2042c01" containerName="oc" Mar 08 01:20:42 crc kubenswrapper[4762]: I0308 01:20:42.418931 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:42 crc kubenswrapper[4762]: I0308 01:20:42.433318 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-86v7v"] Mar 08 01:20:42 crc kubenswrapper[4762]: I0308 01:20:42.507150 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eafd1d3-e59f-4003-a7bd-61be24f069b0-utilities\") pod \"community-operators-86v7v\" (UID: \"8eafd1d3-e59f-4003-a7bd-61be24f069b0\") " pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:42 crc kubenswrapper[4762]: I0308 01:20:42.507265 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eafd1d3-e59f-4003-a7bd-61be24f069b0-catalog-content\") pod \"community-operators-86v7v\" (UID: \"8eafd1d3-e59f-4003-a7bd-61be24f069b0\") " pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:42 crc kubenswrapper[4762]: I0308 01:20:42.507706 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jq5c\" (UniqueName: \"kubernetes.io/projected/8eafd1d3-e59f-4003-a7bd-61be24f069b0-kube-api-access-7jq5c\") pod \"community-operators-86v7v\" (UID: \"8eafd1d3-e59f-4003-a7bd-61be24f069b0\") " pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:42 crc kubenswrapper[4762]: I0308 01:20:42.609746 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eafd1d3-e59f-4003-a7bd-61be24f069b0-catalog-content\") pod \"community-operators-86v7v\" (UID: \"8eafd1d3-e59f-4003-a7bd-61be24f069b0\") " pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:42 crc kubenswrapper[4762]: I0308 01:20:42.609875 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7jq5c\" (UniqueName: \"kubernetes.io/projected/8eafd1d3-e59f-4003-a7bd-61be24f069b0-kube-api-access-7jq5c\") pod \"community-operators-86v7v\" (UID: \"8eafd1d3-e59f-4003-a7bd-61be24f069b0\") " pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:42 crc kubenswrapper[4762]: I0308 01:20:42.609949 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eafd1d3-e59f-4003-a7bd-61be24f069b0-utilities\") pod \"community-operators-86v7v\" (UID: \"8eafd1d3-e59f-4003-a7bd-61be24f069b0\") " pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:42 crc kubenswrapper[4762]: I0308 01:20:42.610631 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eafd1d3-e59f-4003-a7bd-61be24f069b0-catalog-content\") pod \"community-operators-86v7v\" (UID: \"8eafd1d3-e59f-4003-a7bd-61be24f069b0\") " pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:42 crc kubenswrapper[4762]: I0308 01:20:42.610688 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eafd1d3-e59f-4003-a7bd-61be24f069b0-utilities\") pod \"community-operators-86v7v\" (UID: \"8eafd1d3-e59f-4003-a7bd-61be24f069b0\") " pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:42 crc kubenswrapper[4762]: I0308 01:20:42.632708 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jq5c\" (UniqueName: \"kubernetes.io/projected/8eafd1d3-e59f-4003-a7bd-61be24f069b0-kube-api-access-7jq5c\") pod \"community-operators-86v7v\" (UID: \"8eafd1d3-e59f-4003-a7bd-61be24f069b0\") " pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:42 crc kubenswrapper[4762]: I0308 01:20:42.750055 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:43 crc kubenswrapper[4762]: I0308 01:20:43.247459 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-86v7v"] Mar 08 01:20:44 crc kubenswrapper[4762]: I0308 01:20:44.102167 4762 generic.go:334] "Generic (PLEG): container finished" podID="8eafd1d3-e59f-4003-a7bd-61be24f069b0" containerID="450043d42611fae7a44adc6cd58927754f39fa7bd114e54e1a2ba6dfbd6ff5fa" exitCode=0 Mar 08 01:20:44 crc kubenswrapper[4762]: I0308 01:20:44.102285 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86v7v" event={"ID":"8eafd1d3-e59f-4003-a7bd-61be24f069b0","Type":"ContainerDied","Data":"450043d42611fae7a44adc6cd58927754f39fa7bd114e54e1a2ba6dfbd6ff5fa"} Mar 08 01:20:44 crc kubenswrapper[4762]: I0308 01:20:44.102505 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86v7v" event={"ID":"8eafd1d3-e59f-4003-a7bd-61be24f069b0","Type":"ContainerStarted","Data":"238c665d4bb63fe548c17269cd42b36036c56c8f2a105bceaa36c0b04fe762a4"} Mar 08 01:20:45 crc kubenswrapper[4762]: I0308 01:20:45.115668 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86v7v" event={"ID":"8eafd1d3-e59f-4003-a7bd-61be24f069b0","Type":"ContainerStarted","Data":"3a9afd13143e43adcd303464b13a1c29cf8e218a61876941d41d2337a9ac92c7"} Mar 08 01:20:46 crc kubenswrapper[4762]: I0308 01:20:46.134629 4762 generic.go:334] "Generic (PLEG): container finished" podID="8eafd1d3-e59f-4003-a7bd-61be24f069b0" containerID="3a9afd13143e43adcd303464b13a1c29cf8e218a61876941d41d2337a9ac92c7" exitCode=0 Mar 08 01:20:46 crc kubenswrapper[4762]: I0308 01:20:46.134885 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86v7v" 
event={"ID":"8eafd1d3-e59f-4003-a7bd-61be24f069b0","Type":"ContainerDied","Data":"3a9afd13143e43adcd303464b13a1c29cf8e218a61876941d41d2337a9ac92c7"} Mar 08 01:20:47 crc kubenswrapper[4762]: I0308 01:20:47.153927 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86v7v" event={"ID":"8eafd1d3-e59f-4003-a7bd-61be24f069b0","Type":"ContainerStarted","Data":"f6a5c1f403c3a88591a111fc9defca7705b78c3f16ab79134d8d7556fff66a07"} Mar 08 01:20:47 crc kubenswrapper[4762]: I0308 01:20:47.174833 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-86v7v" podStartSLOduration=2.723032393 podStartE2EDuration="5.174817894s" podCreationTimestamp="2026-03-08 01:20:42 +0000 UTC" firstStartedPulling="2026-03-08 01:20:44.10804651 +0000 UTC m=+3465.582190854" lastFinishedPulling="2026-03-08 01:20:46.559832011 +0000 UTC m=+3468.033976355" observedRunningTime="2026-03-08 01:20:47.170405317 +0000 UTC m=+3468.644549701" watchObservedRunningTime="2026-03-08 01:20:47.174817894 +0000 UTC m=+3468.648962238" Mar 08 01:20:52 crc kubenswrapper[4762]: I0308 01:20:52.751114 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:52 crc kubenswrapper[4762]: I0308 01:20:52.751848 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:52 crc kubenswrapper[4762]: I0308 01:20:52.825476 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:53 crc kubenswrapper[4762]: I0308 01:20:53.295638 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:53 crc kubenswrapper[4762]: I0308 01:20:53.353856 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-86v7v"] Mar 08 01:20:55 crc kubenswrapper[4762]: I0308 01:20:55.256489 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-86v7v" podUID="8eafd1d3-e59f-4003-a7bd-61be24f069b0" containerName="registry-server" containerID="cri-o://f6a5c1f403c3a88591a111fc9defca7705b78c3f16ab79134d8d7556fff66a07" gracePeriod=2 Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:55.807732 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:55.949373 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eafd1d3-e59f-4003-a7bd-61be24f069b0-utilities\") pod \"8eafd1d3-e59f-4003-a7bd-61be24f069b0\" (UID: \"8eafd1d3-e59f-4003-a7bd-61be24f069b0\") " Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:55.949461 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jq5c\" (UniqueName: \"kubernetes.io/projected/8eafd1d3-e59f-4003-a7bd-61be24f069b0-kube-api-access-7jq5c\") pod \"8eafd1d3-e59f-4003-a7bd-61be24f069b0\" (UID: \"8eafd1d3-e59f-4003-a7bd-61be24f069b0\") " Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:55.949502 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eafd1d3-e59f-4003-a7bd-61be24f069b0-catalog-content\") pod \"8eafd1d3-e59f-4003-a7bd-61be24f069b0\" (UID: \"8eafd1d3-e59f-4003-a7bd-61be24f069b0\") " Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:55.951179 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eafd1d3-e59f-4003-a7bd-61be24f069b0-utilities" (OuterVolumeSpecName: "utilities") pod "8eafd1d3-e59f-4003-a7bd-61be24f069b0" (UID: 
"8eafd1d3-e59f-4003-a7bd-61be24f069b0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:55.954742 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8eafd1d3-e59f-4003-a7bd-61be24f069b0-kube-api-access-7jq5c" (OuterVolumeSpecName: "kube-api-access-7jq5c") pod "8eafd1d3-e59f-4003-a7bd-61be24f069b0" (UID: "8eafd1d3-e59f-4003-a7bd-61be24f069b0"). InnerVolumeSpecName "kube-api-access-7jq5c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:55.999466 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8eafd1d3-e59f-4003-a7bd-61be24f069b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8eafd1d3-e59f-4003-a7bd-61be24f069b0" (UID: "8eafd1d3-e59f-4003-a7bd-61be24f069b0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.051865 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8eafd1d3-e59f-4003-a7bd-61be24f069b0-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.051893 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jq5c\" (UniqueName: \"kubernetes.io/projected/8eafd1d3-e59f-4003-a7bd-61be24f069b0-kube-api-access-7jq5c\") on node \"crc\" DevicePath \"\"" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.051907 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8eafd1d3-e59f-4003-a7bd-61be24f069b0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.264369 4762 scope.go:117] "RemoveContainer" 
containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:20:56 crc kubenswrapper[4762]: E0308 01:20:56.265159 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.272357 4762 generic.go:334] "Generic (PLEG): container finished" podID="8eafd1d3-e59f-4003-a7bd-61be24f069b0" containerID="f6a5c1f403c3a88591a111fc9defca7705b78c3f16ab79134d8d7556fff66a07" exitCode=0 Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.272413 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86v7v" event={"ID":"8eafd1d3-e59f-4003-a7bd-61be24f069b0","Type":"ContainerDied","Data":"f6a5c1f403c3a88591a111fc9defca7705b78c3f16ab79134d8d7556fff66a07"} Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.272434 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-86v7v" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.272464 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-86v7v" event={"ID":"8eafd1d3-e59f-4003-a7bd-61be24f069b0","Type":"ContainerDied","Data":"238c665d4bb63fe548c17269cd42b36036c56c8f2a105bceaa36c0b04fe762a4"} Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.272493 4762 scope.go:117] "RemoveContainer" containerID="f6a5c1f403c3a88591a111fc9defca7705b78c3f16ab79134d8d7556fff66a07" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.319863 4762 scope.go:117] "RemoveContainer" containerID="3a9afd13143e43adcd303464b13a1c29cf8e218a61876941d41d2337a9ac92c7" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.335251 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-86v7v"] Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.352740 4762 scope.go:117] "RemoveContainer" containerID="450043d42611fae7a44adc6cd58927754f39fa7bd114e54e1a2ba6dfbd6ff5fa" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.353473 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-86v7v"] Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.413730 4762 scope.go:117] "RemoveContainer" containerID="f6a5c1f403c3a88591a111fc9defca7705b78c3f16ab79134d8d7556fff66a07" Mar 08 01:20:56 crc kubenswrapper[4762]: E0308 01:20:56.414321 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6a5c1f403c3a88591a111fc9defca7705b78c3f16ab79134d8d7556fff66a07\": container with ID starting with f6a5c1f403c3a88591a111fc9defca7705b78c3f16ab79134d8d7556fff66a07 not found: ID does not exist" containerID="f6a5c1f403c3a88591a111fc9defca7705b78c3f16ab79134d8d7556fff66a07" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.414425 4762 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6a5c1f403c3a88591a111fc9defca7705b78c3f16ab79134d8d7556fff66a07"} err="failed to get container status \"f6a5c1f403c3a88591a111fc9defca7705b78c3f16ab79134d8d7556fff66a07\": rpc error: code = NotFound desc = could not find container \"f6a5c1f403c3a88591a111fc9defca7705b78c3f16ab79134d8d7556fff66a07\": container with ID starting with f6a5c1f403c3a88591a111fc9defca7705b78c3f16ab79134d8d7556fff66a07 not found: ID does not exist" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.414511 4762 scope.go:117] "RemoveContainer" containerID="3a9afd13143e43adcd303464b13a1c29cf8e218a61876941d41d2337a9ac92c7" Mar 08 01:20:56 crc kubenswrapper[4762]: E0308 01:20:56.414951 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a9afd13143e43adcd303464b13a1c29cf8e218a61876941d41d2337a9ac92c7\": container with ID starting with 3a9afd13143e43adcd303464b13a1c29cf8e218a61876941d41d2337a9ac92c7 not found: ID does not exist" containerID="3a9afd13143e43adcd303464b13a1c29cf8e218a61876941d41d2337a9ac92c7" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.414997 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a9afd13143e43adcd303464b13a1c29cf8e218a61876941d41d2337a9ac92c7"} err="failed to get container status \"3a9afd13143e43adcd303464b13a1c29cf8e218a61876941d41d2337a9ac92c7\": rpc error: code = NotFound desc = could not find container \"3a9afd13143e43adcd303464b13a1c29cf8e218a61876941d41d2337a9ac92c7\": container with ID starting with 3a9afd13143e43adcd303464b13a1c29cf8e218a61876941d41d2337a9ac92c7 not found: ID does not exist" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.415039 4762 scope.go:117] "RemoveContainer" containerID="450043d42611fae7a44adc6cd58927754f39fa7bd114e54e1a2ba6dfbd6ff5fa" Mar 08 01:20:56 crc kubenswrapper[4762]: E0308 
01:20:56.415329 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"450043d42611fae7a44adc6cd58927754f39fa7bd114e54e1a2ba6dfbd6ff5fa\": container with ID starting with 450043d42611fae7a44adc6cd58927754f39fa7bd114e54e1a2ba6dfbd6ff5fa not found: ID does not exist" containerID="450043d42611fae7a44adc6cd58927754f39fa7bd114e54e1a2ba6dfbd6ff5fa" Mar 08 01:20:56 crc kubenswrapper[4762]: I0308 01:20:56.415354 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450043d42611fae7a44adc6cd58927754f39fa7bd114e54e1a2ba6dfbd6ff5fa"} err="failed to get container status \"450043d42611fae7a44adc6cd58927754f39fa7bd114e54e1a2ba6dfbd6ff5fa\": rpc error: code = NotFound desc = could not find container \"450043d42611fae7a44adc6cd58927754f39fa7bd114e54e1a2ba6dfbd6ff5fa\": container with ID starting with 450043d42611fae7a44adc6cd58927754f39fa7bd114e54e1a2ba6dfbd6ff5fa not found: ID does not exist" Mar 08 01:20:57 crc kubenswrapper[4762]: I0308 01:20:57.305381 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8eafd1d3-e59f-4003-a7bd-61be24f069b0" path="/var/lib/kubelet/pods/8eafd1d3-e59f-4003-a7bd-61be24f069b0/volumes" Mar 08 01:21:07 crc kubenswrapper[4762]: I0308 01:21:07.443655 4762 generic.go:334] "Generic (PLEG): container finished" podID="9644de97-590e-4e5d-b951-241947044e95" containerID="0545899f3046e663104641436ef55c88477d6f863f01cce42f0818b0fc8d342a" exitCode=0 Mar 08 01:21:07 crc kubenswrapper[4762]: I0308 01:21:07.443809 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" event={"ID":"9644de97-590e-4e5d-b951-241947044e95","Type":"ContainerDied","Data":"0545899f3046e663104641436ef55c88477d6f863f01cce42f0818b0fc8d342a"} Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.068071 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.237291 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-inventory\") pod \"9644de97-590e-4e5d-b951-241947044e95\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.237930 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9644de97-590e-4e5d-b951-241947044e95-ovncontroller-config-0\") pod \"9644de97-590e-4e5d-b951-241947044e95\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.238135 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ssh-key-openstack-edpm-ipam\") pod \"9644de97-590e-4e5d-b951-241947044e95\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.238450 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ceph\") pod \"9644de97-590e-4e5d-b951-241947044e95\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.238531 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ovn-combined-ca-bundle\") pod \"9644de97-590e-4e5d-b951-241947044e95\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.238615 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-87kb6\" (UniqueName: \"kubernetes.io/projected/9644de97-590e-4e5d-b951-241947044e95-kube-api-access-87kb6\") pod \"9644de97-590e-4e5d-b951-241947044e95\" (UID: \"9644de97-590e-4e5d-b951-241947044e95\") " Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.243288 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9644de97-590e-4e5d-b951-241947044e95-kube-api-access-87kb6" (OuterVolumeSpecName: "kube-api-access-87kb6") pod "9644de97-590e-4e5d-b951-241947044e95" (UID: "9644de97-590e-4e5d-b951-241947044e95"). InnerVolumeSpecName "kube-api-access-87kb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.243920 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "9644de97-590e-4e5d-b951-241947044e95" (UID: "9644de97-590e-4e5d-b951-241947044e95"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.249973 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ceph" (OuterVolumeSpecName: "ceph") pod "9644de97-590e-4e5d-b951-241947044e95" (UID: "9644de97-590e-4e5d-b951-241947044e95"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.267696 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9644de97-590e-4e5d-b951-241947044e95" (UID: "9644de97-590e-4e5d-b951-241947044e95"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.283609 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9644de97-590e-4e5d-b951-241947044e95-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "9644de97-590e-4e5d-b951-241947044e95" (UID: "9644de97-590e-4e5d-b951-241947044e95"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.284973 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-inventory" (OuterVolumeSpecName: "inventory") pod "9644de97-590e-4e5d-b951-241947044e95" (UID: "9644de97-590e-4e5d-b951-241947044e95"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.342456 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87kb6\" (UniqueName: \"kubernetes.io/projected/9644de97-590e-4e5d-b951-241947044e95-kube-api-access-87kb6\") on node \"crc\" DevicePath \"\"" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.342500 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.342514 4762 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/9644de97-590e-4e5d-b951-241947044e95-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.342526 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.342539 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.342551 4762 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9644de97-590e-4e5d-b951-241947044e95-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.477875 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" event={"ID":"9644de97-590e-4e5d-b951-241947044e95","Type":"ContainerDied","Data":"7d7f6ce95c4f10b5f1c8c63c133d3f1c1f7a03d6d4e3c3f90bfe3e625756659c"} Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.477980 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d7f6ce95c4f10b5f1c8c63c133d3f1c1f7a03d6d4e3c3f90bfe3e625756659c" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.478125 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-58527" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.600950 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"] Mar 08 01:21:09 crc kubenswrapper[4762]: E0308 01:21:09.602490 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9644de97-590e-4e5d-b951-241947044e95" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.602518 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="9644de97-590e-4e5d-b951-241947044e95" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 08 01:21:09 crc kubenswrapper[4762]: E0308 01:21:09.602551 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eafd1d3-e59f-4003-a7bd-61be24f069b0" containerName="registry-server" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.602559 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eafd1d3-e59f-4003-a7bd-61be24f069b0" containerName="registry-server" Mar 08 01:21:09 crc kubenswrapper[4762]: E0308 01:21:09.602597 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eafd1d3-e59f-4003-a7bd-61be24f069b0" containerName="extract-utilities" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.602606 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eafd1d3-e59f-4003-a7bd-61be24f069b0" containerName="extract-utilities" Mar 08 01:21:09 crc kubenswrapper[4762]: E0308 01:21:09.602630 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8eafd1d3-e59f-4003-a7bd-61be24f069b0" containerName="extract-content" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.602638 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8eafd1d3-e59f-4003-a7bd-61be24f069b0" containerName="extract-content" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.602904 4762 
memory_manager.go:354] "RemoveStaleState removing state" podUID="8eafd1d3-e59f-4003-a7bd-61be24f069b0" containerName="registry-server" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.602933 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="9644de97-590e-4e5d-b951-241947044e95" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.603787 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.611234 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.611279 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.611499 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.611553 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.611648 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.611750 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.611817 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.623888 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"] Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.751784 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.751883 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.752401 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txsns\" (UniqueName: \"kubernetes.io/projected/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-kube-api-access-txsns\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.752492 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c" Mar 08 01:21:09 crc kubenswrapper[4762]: 
I0308 01:21:09.752675 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.752830 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.753177 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c" Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.854820 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c" Mar 08 01:21:09 crc 
kubenswrapper[4762]: I0308 01:21:09.854890 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"
Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.854991 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"
Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.855061 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"
Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.855101 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"
Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.855149 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txsns\" (UniqueName: \"kubernetes.io/projected/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-kube-api-access-txsns\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"
Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.855173 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"
Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.859781 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"
Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.860209 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"
Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.860246 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"
Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.860999 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"
Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.862835 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"
Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.864048 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"
Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.873260 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txsns\" (UniqueName: \"kubernetes.io/projected/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-kube-api-access-txsns\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"
Mar 08 01:21:09 crc kubenswrapper[4762]: I0308 01:21:09.939866 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"
Mar 08 01:21:10 crc kubenswrapper[4762]: I0308 01:21:10.293826 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499"
Mar 08 01:21:10 crc kubenswrapper[4762]: E0308 01:21:10.296624 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c"
Mar 08 01:21:10 crc kubenswrapper[4762]: I0308 01:21:10.564969 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c"]
Mar 08 01:21:11 crc kubenswrapper[4762]: I0308 01:21:11.503348 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c" event={"ID":"6fdab820-9b0c-4bb9-b3aa-c00329fcf356","Type":"ContainerStarted","Data":"8fe2f6abaffd65986ee4eb162b5f1e5890f168c11f977aaa1b5a618a399e4378"}
Mar 08 01:21:11 crc kubenswrapper[4762]: I0308 01:21:11.504076 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c" event={"ID":"6fdab820-9b0c-4bb9-b3aa-c00329fcf356","Type":"ContainerStarted","Data":"e146073a603c7923a94e7980eabae9f1989d9db7fa64bfa1b002f358a4354631"}
Mar 08 01:21:11 crc kubenswrapper[4762]: I0308 01:21:11.535220 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c" podStartSLOduration=1.981046217 podStartE2EDuration="2.535194737s" podCreationTimestamp="2026-03-08 01:21:09 +0000 UTC" firstStartedPulling="2026-03-08 01:21:10.568114998 +0000 UTC m=+3492.042259372" lastFinishedPulling="2026-03-08 01:21:11.122263538 +0000 UTC m=+3492.596407892" observedRunningTime="2026-03-08 01:21:11.52202789 +0000 UTC m=+3492.996172284" watchObservedRunningTime="2026-03-08 01:21:11.535194737 +0000 UTC m=+3493.009339091"
Mar 08 01:21:12 crc kubenswrapper[4762]: I0308 01:21:12.643959 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-grnk9"]
Mar 08 01:21:12 crc kubenswrapper[4762]: I0308 01:21:12.658942 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:12 crc kubenswrapper[4762]: I0308 01:21:12.663359 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z68d\" (UniqueName: \"kubernetes.io/projected/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-kube-api-access-2z68d\") pod \"certified-operators-grnk9\" (UID: \"783184fc-35ee-4f18-b9c3-cbbf175ae5c2\") " pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:12 crc kubenswrapper[4762]: I0308 01:21:12.664050 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-catalog-content\") pod \"certified-operators-grnk9\" (UID: \"783184fc-35ee-4f18-b9c3-cbbf175ae5c2\") " pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:12 crc kubenswrapper[4762]: I0308 01:21:12.664282 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-utilities\") pod \"certified-operators-grnk9\" (UID: \"783184fc-35ee-4f18-b9c3-cbbf175ae5c2\") " pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:12 crc kubenswrapper[4762]: I0308 01:21:12.674556 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-grnk9"]
Mar 08 01:21:12 crc kubenswrapper[4762]: I0308 01:21:12.765704 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z68d\" (UniqueName: \"kubernetes.io/projected/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-kube-api-access-2z68d\") pod \"certified-operators-grnk9\" (UID: \"783184fc-35ee-4f18-b9c3-cbbf175ae5c2\") " pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:12 crc kubenswrapper[4762]: I0308 01:21:12.765850 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-catalog-content\") pod \"certified-operators-grnk9\" (UID: \"783184fc-35ee-4f18-b9c3-cbbf175ae5c2\") " pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:12 crc kubenswrapper[4762]: I0308 01:21:12.766046 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-utilities\") pod \"certified-operators-grnk9\" (UID: \"783184fc-35ee-4f18-b9c3-cbbf175ae5c2\") " pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:12 crc kubenswrapper[4762]: I0308 01:21:12.766326 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-catalog-content\") pod \"certified-operators-grnk9\" (UID: \"783184fc-35ee-4f18-b9c3-cbbf175ae5c2\") " pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:12 crc kubenswrapper[4762]: I0308 01:21:12.766525 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-utilities\") pod \"certified-operators-grnk9\" (UID: \"783184fc-35ee-4f18-b9c3-cbbf175ae5c2\") " pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:12 crc kubenswrapper[4762]: I0308 01:21:12.791590 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z68d\" (UniqueName: \"kubernetes.io/projected/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-kube-api-access-2z68d\") pod \"certified-operators-grnk9\" (UID: \"783184fc-35ee-4f18-b9c3-cbbf175ae5c2\") " pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:12 crc kubenswrapper[4762]: I0308 01:21:12.989431 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:13 crc kubenswrapper[4762]: I0308 01:21:13.502499 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-grnk9"]
Mar 08 01:21:13 crc kubenswrapper[4762]: I0308 01:21:13.524063 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-grnk9" event={"ID":"783184fc-35ee-4f18-b9c3-cbbf175ae5c2","Type":"ContainerStarted","Data":"b1197b80e6860bd880a72793061fa0697adc9873e746c9fd4fc9bd1a87e90281"}
Mar 08 01:21:14 crc kubenswrapper[4762]: I0308 01:21:14.544897 4762 generic.go:334] "Generic (PLEG): container finished" podID="783184fc-35ee-4f18-b9c3-cbbf175ae5c2" containerID="e7dc0d195827da56950440c5ff77fdcf9e7bf2c7e43b1f24299b9f4549e03f83" exitCode=0
Mar 08 01:21:14 crc kubenswrapper[4762]: I0308 01:21:14.545234 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-grnk9" event={"ID":"783184fc-35ee-4f18-b9c3-cbbf175ae5c2","Type":"ContainerDied","Data":"e7dc0d195827da56950440c5ff77fdcf9e7bf2c7e43b1f24299b9f4549e03f83"}
Mar 08 01:21:15 crc kubenswrapper[4762]: I0308 01:21:15.559385 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-grnk9" event={"ID":"783184fc-35ee-4f18-b9c3-cbbf175ae5c2","Type":"ContainerStarted","Data":"2efee497c5937cb099abd929b840a3f05d2ac1d09923369de5d4db06ec49514d"}
Mar 08 01:21:17 crc kubenswrapper[4762]: I0308 01:21:17.588929 4762 generic.go:334] "Generic (PLEG): container finished" podID="783184fc-35ee-4f18-b9c3-cbbf175ae5c2" containerID="2efee497c5937cb099abd929b840a3f05d2ac1d09923369de5d4db06ec49514d" exitCode=0
Mar 08 01:21:17 crc kubenswrapper[4762]: I0308 01:21:17.588984 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-grnk9" event={"ID":"783184fc-35ee-4f18-b9c3-cbbf175ae5c2","Type":"ContainerDied","Data":"2efee497c5937cb099abd929b840a3f05d2ac1d09923369de5d4db06ec49514d"}
Mar 08 01:21:18 crc kubenswrapper[4762]: I0308 01:21:18.603545 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-grnk9" event={"ID":"783184fc-35ee-4f18-b9c3-cbbf175ae5c2","Type":"ContainerStarted","Data":"a4eb8fe5397f8b67dde9165ab86f18cdc271b711f6b31be0a13979f756b9ca1d"}
Mar 08 01:21:18 crc kubenswrapper[4762]: I0308 01:21:18.659896 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-grnk9" podStartSLOduration=3.19711502 podStartE2EDuration="6.659869538s" podCreationTimestamp="2026-03-08 01:21:12 +0000 UTC" firstStartedPulling="2026-03-08 01:21:14.551079746 +0000 UTC m=+3496.025224130" lastFinishedPulling="2026-03-08 01:21:18.013834294 +0000 UTC m=+3499.487978648" observedRunningTime="2026-03-08 01:21:18.63828564 +0000 UTC m=+3500.112429994" watchObservedRunningTime="2026-03-08 01:21:18.659869538 +0000 UTC m=+3500.134013882"
Mar 08 01:21:22 crc kubenswrapper[4762]: I0308 01:21:22.990512 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:22 crc kubenswrapper[4762]: I0308 01:21:22.990884 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:23 crc kubenswrapper[4762]: I0308 01:21:23.263370 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499"
Mar 08 01:21:23 crc kubenswrapper[4762]: E0308 01:21:23.264239 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c"
Mar 08 01:21:24 crc kubenswrapper[4762]: I0308 01:21:24.051079 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-grnk9" podUID="783184fc-35ee-4f18-b9c3-cbbf175ae5c2" containerName="registry-server" probeResult="failure" output=<
Mar 08 01:21:24 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s
Mar 08 01:21:24 crc kubenswrapper[4762]: >
Mar 08 01:21:26 crc kubenswrapper[4762]: I0308 01:21:26.422939 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s6pts"]
Mar 08 01:21:26 crc kubenswrapper[4762]: I0308 01:21:26.425442 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s6pts"
Mar 08 01:21:26 crc kubenswrapper[4762]: I0308 01:21:26.437154 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s6pts"]
Mar 08 01:21:26 crc kubenswrapper[4762]: I0308 01:21:26.584445 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-catalog-content\") pod \"redhat-operators-s6pts\" (UID: \"862c630a-b1e9-47e8-9a79-8501eb1d9dfb\") " pod="openshift-marketplace/redhat-operators-s6pts"
Mar 08 01:21:26 crc kubenswrapper[4762]: I0308 01:21:26.584532 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-utilities\") pod \"redhat-operators-s6pts\" (UID: \"862c630a-b1e9-47e8-9a79-8501eb1d9dfb\") " pod="openshift-marketplace/redhat-operators-s6pts"
Mar 08 01:21:26 crc kubenswrapper[4762]: I0308 01:21:26.584690 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js76l\" (UniqueName: \"kubernetes.io/projected/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-kube-api-access-js76l\") pod \"redhat-operators-s6pts\" (UID: \"862c630a-b1e9-47e8-9a79-8501eb1d9dfb\") " pod="openshift-marketplace/redhat-operators-s6pts"
Mar 08 01:21:26 crc kubenswrapper[4762]: I0308 01:21:26.686942 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-catalog-content\") pod \"redhat-operators-s6pts\" (UID: \"862c630a-b1e9-47e8-9a79-8501eb1d9dfb\") " pod="openshift-marketplace/redhat-operators-s6pts"
Mar 08 01:21:26 crc kubenswrapper[4762]: I0308 01:21:26.687022 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-utilities\") pod \"redhat-operators-s6pts\" (UID: \"862c630a-b1e9-47e8-9a79-8501eb1d9dfb\") " pod="openshift-marketplace/redhat-operators-s6pts"
Mar 08 01:21:26 crc kubenswrapper[4762]: I0308 01:21:26.687083 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js76l\" (UniqueName: \"kubernetes.io/projected/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-kube-api-access-js76l\") pod \"redhat-operators-s6pts\" (UID: \"862c630a-b1e9-47e8-9a79-8501eb1d9dfb\") " pod="openshift-marketplace/redhat-operators-s6pts"
Mar 08 01:21:26 crc kubenswrapper[4762]: I0308 01:21:26.687531 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-catalog-content\") pod \"redhat-operators-s6pts\" (UID: \"862c630a-b1e9-47e8-9a79-8501eb1d9dfb\") " pod="openshift-marketplace/redhat-operators-s6pts"
Mar 08 01:21:26 crc kubenswrapper[4762]: I0308 01:21:26.687802 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-utilities\") pod \"redhat-operators-s6pts\" (UID: \"862c630a-b1e9-47e8-9a79-8501eb1d9dfb\") " pod="openshift-marketplace/redhat-operators-s6pts"
Mar 08 01:21:26 crc kubenswrapper[4762]: I0308 01:21:26.709547 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js76l\" (UniqueName: \"kubernetes.io/projected/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-kube-api-access-js76l\") pod \"redhat-operators-s6pts\" (UID: \"862c630a-b1e9-47e8-9a79-8501eb1d9dfb\") " pod="openshift-marketplace/redhat-operators-s6pts"
Mar 08 01:21:26 crc kubenswrapper[4762]: I0308 01:21:26.765134 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s6pts"
Mar 08 01:21:27 crc kubenswrapper[4762]: I0308 01:21:27.324064 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s6pts"]
Mar 08 01:21:27 crc kubenswrapper[4762]: W0308 01:21:27.328092 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod862c630a_b1e9_47e8_9a79_8501eb1d9dfb.slice/crio-a1ed3bd2a179db871c194bbe5fd59e38638cf43eb23eb2672207237abd5cab6e WatchSource:0}: Error finding container a1ed3bd2a179db871c194bbe5fd59e38638cf43eb23eb2672207237abd5cab6e: Status 404 returned error can't find the container with id a1ed3bd2a179db871c194bbe5fd59e38638cf43eb23eb2672207237abd5cab6e
Mar 08 01:21:27 crc kubenswrapper[4762]: I0308 01:21:27.707530 4762 generic.go:334] "Generic (PLEG): container finished" podID="862c630a-b1e9-47e8-9a79-8501eb1d9dfb" containerID="35729c968b512bdb6eadfa00f28d587e34067bffe438d48e955090e6d60e0e82" exitCode=0
Mar 08 01:21:27 crc kubenswrapper[4762]: I0308 01:21:27.707619 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6pts" event={"ID":"862c630a-b1e9-47e8-9a79-8501eb1d9dfb","Type":"ContainerDied","Data":"35729c968b512bdb6eadfa00f28d587e34067bffe438d48e955090e6d60e0e82"}
Mar 08 01:21:27 crc kubenswrapper[4762]: I0308 01:21:27.707814 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6pts" event={"ID":"862c630a-b1e9-47e8-9a79-8501eb1d9dfb","Type":"ContainerStarted","Data":"a1ed3bd2a179db871c194bbe5fd59e38638cf43eb23eb2672207237abd5cab6e"}
Mar 08 01:21:27 crc kubenswrapper[4762]: E0308 01:21:27.721210 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod862c630a_b1e9_47e8_9a79_8501eb1d9dfb.slice/crio-conmon-35729c968b512bdb6eadfa00f28d587e34067bffe438d48e955090e6d60e0e82.scope\": RecentStats: unable to find data in memory cache]"
Mar 08 01:21:28 crc kubenswrapper[4762]: I0308 01:21:28.724967 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6pts" event={"ID":"862c630a-b1e9-47e8-9a79-8501eb1d9dfb","Type":"ContainerStarted","Data":"209381bccd5c41df7bd0463b7a72c1a58d72f119897129dc96e836dfef145d8b"}
Mar 08 01:21:33 crc kubenswrapper[4762]: I0308 01:21:33.044498 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:33 crc kubenswrapper[4762]: I0308 01:21:33.098665 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:33 crc kubenswrapper[4762]: I0308 01:21:33.299994 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-grnk9"]
Mar 08 01:21:33 crc kubenswrapper[4762]: I0308 01:21:33.782224 4762 generic.go:334] "Generic (PLEG): container finished" podID="862c630a-b1e9-47e8-9a79-8501eb1d9dfb" containerID="209381bccd5c41df7bd0463b7a72c1a58d72f119897129dc96e836dfef145d8b" exitCode=0
Mar 08 01:21:33 crc kubenswrapper[4762]: I0308 01:21:33.782575 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6pts" event={"ID":"862c630a-b1e9-47e8-9a79-8501eb1d9dfb","Type":"ContainerDied","Data":"209381bccd5c41df7bd0463b7a72c1a58d72f119897129dc96e836dfef145d8b"}
Mar 08 01:21:34 crc kubenswrapper[4762]: I0308 01:21:34.793244 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6pts" event={"ID":"862c630a-b1e9-47e8-9a79-8501eb1d9dfb","Type":"ContainerStarted","Data":"a63f6d330ed0a992b353674d0a88e793b0e3c9cacb8b68f1c2ff3c517a905339"}
Mar 08 01:21:34 crc kubenswrapper[4762]: I0308 01:21:34.793406 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-grnk9" podUID="783184fc-35ee-4f18-b9c3-cbbf175ae5c2" containerName="registry-server" containerID="cri-o://a4eb8fe5397f8b67dde9165ab86f18cdc271b711f6b31be0a13979f756b9ca1d" gracePeriod=2
Mar 08 01:21:34 crc kubenswrapper[4762]: I0308 01:21:34.823132 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s6pts" podStartSLOduration=2.053269452 podStartE2EDuration="8.823109451s" podCreationTimestamp="2026-03-08 01:21:26 +0000 UTC" firstStartedPulling="2026-03-08 01:21:27.709477673 +0000 UTC m=+3509.183622017" lastFinishedPulling="2026-03-08 01:21:34.479317642 +0000 UTC m=+3515.953462016" observedRunningTime="2026-03-08 01:21:34.815475195 +0000 UTC m=+3516.289619569" watchObservedRunningTime="2026-03-08 01:21:34.823109451 +0000 UTC m=+3516.297253815"
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.409931 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.589511 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z68d\" (UniqueName: \"kubernetes.io/projected/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-kube-api-access-2z68d\") pod \"783184fc-35ee-4f18-b9c3-cbbf175ae5c2\" (UID: \"783184fc-35ee-4f18-b9c3-cbbf175ae5c2\") "
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.589983 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-utilities\") pod \"783184fc-35ee-4f18-b9c3-cbbf175ae5c2\" (UID: \"783184fc-35ee-4f18-b9c3-cbbf175ae5c2\") "
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.590038 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-catalog-content\") pod \"783184fc-35ee-4f18-b9c3-cbbf175ae5c2\" (UID: \"783184fc-35ee-4f18-b9c3-cbbf175ae5c2\") "
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.590898 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-utilities" (OuterVolumeSpecName: "utilities") pod "783184fc-35ee-4f18-b9c3-cbbf175ae5c2" (UID: "783184fc-35ee-4f18-b9c3-cbbf175ae5c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.613062 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-kube-api-access-2z68d" (OuterVolumeSpecName: "kube-api-access-2z68d") pod "783184fc-35ee-4f18-b9c3-cbbf175ae5c2" (UID: "783184fc-35ee-4f18-b9c3-cbbf175ae5c2"). InnerVolumeSpecName "kube-api-access-2z68d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.691860 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "783184fc-35ee-4f18-b9c3-cbbf175ae5c2" (UID: "783184fc-35ee-4f18-b9c3-cbbf175ae5c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.692515 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-utilities\") on node \"crc\" DevicePath \"\""
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.692591 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.692653 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z68d\" (UniqueName: \"kubernetes.io/projected/783184fc-35ee-4f18-b9c3-cbbf175ae5c2-kube-api-access-2z68d\") on node \"crc\" DevicePath \"\""
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.803743 4762 generic.go:334] "Generic (PLEG): container finished" podID="783184fc-35ee-4f18-b9c3-cbbf175ae5c2" containerID="a4eb8fe5397f8b67dde9165ab86f18cdc271b711f6b31be0a13979f756b9ca1d" exitCode=0
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.803808 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-grnk9" event={"ID":"783184fc-35ee-4f18-b9c3-cbbf175ae5c2","Type":"ContainerDied","Data":"a4eb8fe5397f8b67dde9165ab86f18cdc271b711f6b31be0a13979f756b9ca1d"}
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.803832 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-grnk9" event={"ID":"783184fc-35ee-4f18-b9c3-cbbf175ae5c2","Type":"ContainerDied","Data":"b1197b80e6860bd880a72793061fa0697adc9873e746c9fd4fc9bd1a87e90281"}
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.803851 4762 scope.go:117] "RemoveContainer" containerID="a4eb8fe5397f8b67dde9165ab86f18cdc271b711f6b31be0a13979f756b9ca1d"
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.803965 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-grnk9"
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.827441 4762 scope.go:117] "RemoveContainer" containerID="2efee497c5937cb099abd929b840a3f05d2ac1d09923369de5d4db06ec49514d"
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.841738 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-grnk9"]
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.854952 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-grnk9"]
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.873478 4762 scope.go:117] "RemoveContainer" containerID="e7dc0d195827da56950440c5ff77fdcf9e7bf2c7e43b1f24299b9f4549e03f83"
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.941869 4762 scope.go:117] "RemoveContainer" containerID="a4eb8fe5397f8b67dde9165ab86f18cdc271b711f6b31be0a13979f756b9ca1d"
Mar 08 01:21:35 crc kubenswrapper[4762]: E0308 01:21:35.942317 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4eb8fe5397f8b67dde9165ab86f18cdc271b711f6b31be0a13979f756b9ca1d\": container with ID starting with a4eb8fe5397f8b67dde9165ab86f18cdc271b711f6b31be0a13979f756b9ca1d not found: ID does not exist" containerID="a4eb8fe5397f8b67dde9165ab86f18cdc271b711f6b31be0a13979f756b9ca1d"
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.942342 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4eb8fe5397f8b67dde9165ab86f18cdc271b711f6b31be0a13979f756b9ca1d"} err="failed to get container status \"a4eb8fe5397f8b67dde9165ab86f18cdc271b711f6b31be0a13979f756b9ca1d\": rpc error: code = NotFound desc = could not find container \"a4eb8fe5397f8b67dde9165ab86f18cdc271b711f6b31be0a13979f756b9ca1d\": container with ID starting with a4eb8fe5397f8b67dde9165ab86f18cdc271b711f6b31be0a13979f756b9ca1d not found: ID does not exist"
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.942359 4762 scope.go:117] "RemoveContainer" containerID="2efee497c5937cb099abd929b840a3f05d2ac1d09923369de5d4db06ec49514d"
Mar 08 01:21:35 crc kubenswrapper[4762]: E0308 01:21:35.942720 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2efee497c5937cb099abd929b840a3f05d2ac1d09923369de5d4db06ec49514d\": container with ID starting with 2efee497c5937cb099abd929b840a3f05d2ac1d09923369de5d4db06ec49514d not found: ID does not exist" containerID="2efee497c5937cb099abd929b840a3f05d2ac1d09923369de5d4db06ec49514d"
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.942743 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2efee497c5937cb099abd929b840a3f05d2ac1d09923369de5d4db06ec49514d"} err="failed to get container status \"2efee497c5937cb099abd929b840a3f05d2ac1d09923369de5d4db06ec49514d\": rpc error: code = NotFound desc = could not find container \"2efee497c5937cb099abd929b840a3f05d2ac1d09923369de5d4db06ec49514d\": container with ID starting with 2efee497c5937cb099abd929b840a3f05d2ac1d09923369de5d4db06ec49514d not found: ID does not exist"
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.942769 4762 scope.go:117] "RemoveContainer" containerID="e7dc0d195827da56950440c5ff77fdcf9e7bf2c7e43b1f24299b9f4549e03f83"
Mar 08 01:21:35 crc kubenswrapper[4762]: E0308 01:21:35.943024 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7dc0d195827da56950440c5ff77fdcf9e7bf2c7e43b1f24299b9f4549e03f83\": container with ID starting with e7dc0d195827da56950440c5ff77fdcf9e7bf2c7e43b1f24299b9f4549e03f83 not found: ID does not exist" containerID="e7dc0d195827da56950440c5ff77fdcf9e7bf2c7e43b1f24299b9f4549e03f83"
Mar 08 01:21:35 crc kubenswrapper[4762]: I0308 01:21:35.943047 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7dc0d195827da56950440c5ff77fdcf9e7bf2c7e43b1f24299b9f4549e03f83"} err="failed to get container status \"e7dc0d195827da56950440c5ff77fdcf9e7bf2c7e43b1f24299b9f4549e03f83\": rpc error: code = NotFound desc = could not find container \"e7dc0d195827da56950440c5ff77fdcf9e7bf2c7e43b1f24299b9f4549e03f83\": container with ID starting with e7dc0d195827da56950440c5ff77fdcf9e7bf2c7e43b1f24299b9f4549e03f83 not found: ID does not exist"
Mar 08 01:21:36 crc kubenswrapper[4762]: I0308 01:21:36.765740 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s6pts"
Mar 08 01:21:36 crc kubenswrapper[4762]: I0308 01:21:36.766224 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s6pts"
Mar 08 01:21:37 crc kubenswrapper[4762]: I0308 01:21:37.263898 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499"
Mar 08 01:21:37 crc kubenswrapper[4762]: E0308 01:21:37.264211 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c"
Mar 08 01:21:37 crc kubenswrapper[4762]: I0308 01:21:37.283907 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="783184fc-35ee-4f18-b9c3-cbbf175ae5c2" path="/var/lib/kubelet/pods/783184fc-35ee-4f18-b9c3-cbbf175ae5c2/volumes"
Mar 08 01:21:37 crc kubenswrapper[4762]: I0308 01:21:37.834399 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s6pts" podUID="862c630a-b1e9-47e8-9a79-8501eb1d9dfb" containerName="registry-server" probeResult="failure" output=<
Mar 08 01:21:37 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s
Mar 08 01:21:37 crc kubenswrapper[4762]: >
Mar 08 01:21:47 crc kubenswrapper[4762]: I0308 01:21:47.809128 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s6pts" podUID="862c630a-b1e9-47e8-9a79-8501eb1d9dfb" containerName="registry-server" probeResult="failure" output=<
Mar 08 01:21:47 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s
Mar 08 01:21:47 crc kubenswrapper[4762]: >
Mar 08 01:21:51 crc kubenswrapper[4762]: I0308 01:21:51.266121 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499"
Mar 08 01:21:52 crc kubenswrapper[4762]: I0308 01:21:52.002312 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"ac23d3f891dec3c1df5f19e5bcfc8231d42d9eccb5121b7380d9b3a1de7360b0"}
Mar 08 01:21:56 crc kubenswrapper[4762]: I0308 01:21:56.826698 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s6pts"
Mar 08 01:21:56 crc kubenswrapper[4762]: I0308 01:21:56.889388 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s6pts"
Mar 08 01:21:57 crc kubenswrapper[4762]: I0308 01:21:57.628169 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s6pts"]
Mar 08 01:21:58 crc kubenswrapper[4762]: I0308 01:21:58.067398 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s6pts" podUID="862c630a-b1e9-47e8-9a79-8501eb1d9dfb" containerName="registry-server" containerID="cri-o://a63f6d330ed0a992b353674d0a88e793b0e3c9cacb8b68f1c2ff3c517a905339" gracePeriod=2
Mar 08 01:21:58 crc kubenswrapper[4762]: I0308 01:21:58.680707 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s6pts"
Mar 08 01:21:58 crc kubenswrapper[4762]: I0308 01:21:58.832290 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js76l\" (UniqueName: \"kubernetes.io/projected/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-kube-api-access-js76l\") pod \"862c630a-b1e9-47e8-9a79-8501eb1d9dfb\" (UID: \"862c630a-b1e9-47e8-9a79-8501eb1d9dfb\") "
Mar 08 01:21:58 crc kubenswrapper[4762]: I0308 01:21:58.832622 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-catalog-content\") pod \"862c630a-b1e9-47e8-9a79-8501eb1d9dfb\" (UID: \"862c630a-b1e9-47e8-9a79-8501eb1d9dfb\") "
Mar 08 01:21:58 crc kubenswrapper[4762]: I0308 01:21:58.832786 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-utilities\") pod \"862c630a-b1e9-47e8-9a79-8501eb1d9dfb\" (UID: \"862c630a-b1e9-47e8-9a79-8501eb1d9dfb\") "
Mar 08 01:21:58 crc kubenswrapper[4762]: I0308 01:21:58.833498 4762 operation_generator.go:803]
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-utilities" (OuterVolumeSpecName: "utilities") pod "862c630a-b1e9-47e8-9a79-8501eb1d9dfb" (UID: "862c630a-b1e9-47e8-9a79-8501eb1d9dfb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:21:58 crc kubenswrapper[4762]: I0308 01:21:58.834088 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:21:58 crc kubenswrapper[4762]: I0308 01:21:58.842079 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-kube-api-access-js76l" (OuterVolumeSpecName: "kube-api-access-js76l") pod "862c630a-b1e9-47e8-9a79-8501eb1d9dfb" (UID: "862c630a-b1e9-47e8-9a79-8501eb1d9dfb"). InnerVolumeSpecName "kube-api-access-js76l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:21:58 crc kubenswrapper[4762]: I0308 01:21:58.936551 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js76l\" (UniqueName: \"kubernetes.io/projected/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-kube-api-access-js76l\") on node \"crc\" DevicePath \"\"" Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.007030 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "862c630a-b1e9-47e8-9a79-8501eb1d9dfb" (UID: "862c630a-b1e9-47e8-9a79-8501eb1d9dfb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.038412 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/862c630a-b1e9-47e8-9a79-8501eb1d9dfb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.104074 4762 generic.go:334] "Generic (PLEG): container finished" podID="862c630a-b1e9-47e8-9a79-8501eb1d9dfb" containerID="a63f6d330ed0a992b353674d0a88e793b0e3c9cacb8b68f1c2ff3c517a905339" exitCode=0 Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.104127 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6pts" event={"ID":"862c630a-b1e9-47e8-9a79-8501eb1d9dfb","Type":"ContainerDied","Data":"a63f6d330ed0a992b353674d0a88e793b0e3c9cacb8b68f1c2ff3c517a905339"} Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.104158 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6pts" event={"ID":"862c630a-b1e9-47e8-9a79-8501eb1d9dfb","Type":"ContainerDied","Data":"a1ed3bd2a179db871c194bbe5fd59e38638cf43eb23eb2672207237abd5cab6e"} Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.104181 4762 scope.go:117] "RemoveContainer" containerID="a63f6d330ed0a992b353674d0a88e793b0e3c9cacb8b68f1c2ff3c517a905339" Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.104207 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s6pts" Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.134901 4762 scope.go:117] "RemoveContainer" containerID="209381bccd5c41df7bd0463b7a72c1a58d72f119897129dc96e836dfef145d8b" Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.167558 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s6pts"] Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.180809 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s6pts"] Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.181421 4762 scope.go:117] "RemoveContainer" containerID="35729c968b512bdb6eadfa00f28d587e34067bffe438d48e955090e6d60e0e82" Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.230127 4762 scope.go:117] "RemoveContainer" containerID="a63f6d330ed0a992b353674d0a88e793b0e3c9cacb8b68f1c2ff3c517a905339" Mar 08 01:21:59 crc kubenswrapper[4762]: E0308 01:21:59.230799 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a63f6d330ed0a992b353674d0a88e793b0e3c9cacb8b68f1c2ff3c517a905339\": container with ID starting with a63f6d330ed0a992b353674d0a88e793b0e3c9cacb8b68f1c2ff3c517a905339 not found: ID does not exist" containerID="a63f6d330ed0a992b353674d0a88e793b0e3c9cacb8b68f1c2ff3c517a905339" Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.230842 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a63f6d330ed0a992b353674d0a88e793b0e3c9cacb8b68f1c2ff3c517a905339"} err="failed to get container status \"a63f6d330ed0a992b353674d0a88e793b0e3c9cacb8b68f1c2ff3c517a905339\": rpc error: code = NotFound desc = could not find container \"a63f6d330ed0a992b353674d0a88e793b0e3c9cacb8b68f1c2ff3c517a905339\": container with ID starting with a63f6d330ed0a992b353674d0a88e793b0e3c9cacb8b68f1c2ff3c517a905339 not found: ID does 
not exist" Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.230871 4762 scope.go:117] "RemoveContainer" containerID="209381bccd5c41df7bd0463b7a72c1a58d72f119897129dc96e836dfef145d8b" Mar 08 01:21:59 crc kubenswrapper[4762]: E0308 01:21:59.231684 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"209381bccd5c41df7bd0463b7a72c1a58d72f119897129dc96e836dfef145d8b\": container with ID starting with 209381bccd5c41df7bd0463b7a72c1a58d72f119897129dc96e836dfef145d8b not found: ID does not exist" containerID="209381bccd5c41df7bd0463b7a72c1a58d72f119897129dc96e836dfef145d8b" Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.231734 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209381bccd5c41df7bd0463b7a72c1a58d72f119897129dc96e836dfef145d8b"} err="failed to get container status \"209381bccd5c41df7bd0463b7a72c1a58d72f119897129dc96e836dfef145d8b\": rpc error: code = NotFound desc = could not find container \"209381bccd5c41df7bd0463b7a72c1a58d72f119897129dc96e836dfef145d8b\": container with ID starting with 209381bccd5c41df7bd0463b7a72c1a58d72f119897129dc96e836dfef145d8b not found: ID does not exist" Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.231777 4762 scope.go:117] "RemoveContainer" containerID="35729c968b512bdb6eadfa00f28d587e34067bffe438d48e955090e6d60e0e82" Mar 08 01:21:59 crc kubenswrapper[4762]: E0308 01:21:59.232414 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35729c968b512bdb6eadfa00f28d587e34067bffe438d48e955090e6d60e0e82\": container with ID starting with 35729c968b512bdb6eadfa00f28d587e34067bffe438d48e955090e6d60e0e82 not found: ID does not exist" containerID="35729c968b512bdb6eadfa00f28d587e34067bffe438d48e955090e6d60e0e82" Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.232456 4762 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35729c968b512bdb6eadfa00f28d587e34067bffe438d48e955090e6d60e0e82"} err="failed to get container status \"35729c968b512bdb6eadfa00f28d587e34067bffe438d48e955090e6d60e0e82\": rpc error: code = NotFound desc = could not find container \"35729c968b512bdb6eadfa00f28d587e34067bffe438d48e955090e6d60e0e82\": container with ID starting with 35729c968b512bdb6eadfa00f28d587e34067bffe438d48e955090e6d60e0e82 not found: ID does not exist" Mar 08 01:21:59 crc kubenswrapper[4762]: I0308 01:21:59.284493 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="862c630a-b1e9-47e8-9a79-8501eb1d9dfb" path="/var/lib/kubelet/pods/862c630a-b1e9-47e8-9a79-8501eb1d9dfb/volumes" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.144023 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548882-62fnf"] Mar 08 01:22:00 crc kubenswrapper[4762]: E0308 01:22:00.144458 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783184fc-35ee-4f18-b9c3-cbbf175ae5c2" containerName="extract-content" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.144486 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="783184fc-35ee-4f18-b9c3-cbbf175ae5c2" containerName="extract-content" Mar 08 01:22:00 crc kubenswrapper[4762]: E0308 01:22:00.144501 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862c630a-b1e9-47e8-9a79-8501eb1d9dfb" containerName="extract-content" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.144507 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="862c630a-b1e9-47e8-9a79-8501eb1d9dfb" containerName="extract-content" Mar 08 01:22:00 crc kubenswrapper[4762]: E0308 01:22:00.144523 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862c630a-b1e9-47e8-9a79-8501eb1d9dfb" containerName="extract-utilities" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.144529 4762 
state_mem.go:107] "Deleted CPUSet assignment" podUID="862c630a-b1e9-47e8-9a79-8501eb1d9dfb" containerName="extract-utilities" Mar 08 01:22:00 crc kubenswrapper[4762]: E0308 01:22:00.144541 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783184fc-35ee-4f18-b9c3-cbbf175ae5c2" containerName="registry-server" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.144546 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="783184fc-35ee-4f18-b9c3-cbbf175ae5c2" containerName="registry-server" Mar 08 01:22:00 crc kubenswrapper[4762]: E0308 01:22:00.144567 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="862c630a-b1e9-47e8-9a79-8501eb1d9dfb" containerName="registry-server" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.144572 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="862c630a-b1e9-47e8-9a79-8501eb1d9dfb" containerName="registry-server" Mar 08 01:22:00 crc kubenswrapper[4762]: E0308 01:22:00.144584 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783184fc-35ee-4f18-b9c3-cbbf175ae5c2" containerName="extract-utilities" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.144589 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="783184fc-35ee-4f18-b9c3-cbbf175ae5c2" containerName="extract-utilities" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.144781 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="783184fc-35ee-4f18-b9c3-cbbf175ae5c2" containerName="registry-server" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.144798 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="862c630a-b1e9-47e8-9a79-8501eb1d9dfb" containerName="registry-server" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.145497 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548882-62fnf" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.147381 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.148237 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.148290 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.173454 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548882-62fnf"] Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.264171 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzglk\" (UniqueName: \"kubernetes.io/projected/e57833b9-6496-4543-ba4c-6b596f787168-kube-api-access-rzglk\") pod \"auto-csr-approver-29548882-62fnf\" (UID: \"e57833b9-6496-4543-ba4c-6b596f787168\") " pod="openshift-infra/auto-csr-approver-29548882-62fnf" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.365780 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzglk\" (UniqueName: \"kubernetes.io/projected/e57833b9-6496-4543-ba4c-6b596f787168-kube-api-access-rzglk\") pod \"auto-csr-approver-29548882-62fnf\" (UID: \"e57833b9-6496-4543-ba4c-6b596f787168\") " pod="openshift-infra/auto-csr-approver-29548882-62fnf" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.388189 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzglk\" (UniqueName: \"kubernetes.io/projected/e57833b9-6496-4543-ba4c-6b596f787168-kube-api-access-rzglk\") pod \"auto-csr-approver-29548882-62fnf\" (UID: \"e57833b9-6496-4543-ba4c-6b596f787168\") " 
pod="openshift-infra/auto-csr-approver-29548882-62fnf" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.476356 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548882-62fnf" Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.960492 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548882-62fnf"] Mar 08 01:22:00 crc kubenswrapper[4762]: W0308 01:22:00.967376 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode57833b9_6496_4543_ba4c_6b596f787168.slice/crio-405b97265c45b19b8c7b0f7c30f9b82bb66595cea435325ae9de331ae82bd139 WatchSource:0}: Error finding container 405b97265c45b19b8c7b0f7c30f9b82bb66595cea435325ae9de331ae82bd139: Status 404 returned error can't find the container with id 405b97265c45b19b8c7b0f7c30f9b82bb66595cea435325ae9de331ae82bd139 Mar 08 01:22:00 crc kubenswrapper[4762]: I0308 01:22:00.971673 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 01:22:01 crc kubenswrapper[4762]: I0308 01:22:01.124532 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548882-62fnf" event={"ID":"e57833b9-6496-4543-ba4c-6b596f787168","Type":"ContainerStarted","Data":"405b97265c45b19b8c7b0f7c30f9b82bb66595cea435325ae9de331ae82bd139"} Mar 08 01:22:03 crc kubenswrapper[4762]: I0308 01:22:03.150254 4762 generic.go:334] "Generic (PLEG): container finished" podID="e57833b9-6496-4543-ba4c-6b596f787168" containerID="3d27fd3bfd9edc564bcbf12ddef7d64a19fdff389c7be1e03c374eb169878b54" exitCode=0 Mar 08 01:22:03 crc kubenswrapper[4762]: I0308 01:22:03.150402 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548882-62fnf" 
event={"ID":"e57833b9-6496-4543-ba4c-6b596f787168","Type":"ContainerDied","Data":"3d27fd3bfd9edc564bcbf12ddef7d64a19fdff389c7be1e03c374eb169878b54"} Mar 08 01:22:04 crc kubenswrapper[4762]: I0308 01:22:04.729019 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548882-62fnf" Mar 08 01:22:04 crc kubenswrapper[4762]: I0308 01:22:04.874788 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzglk\" (UniqueName: \"kubernetes.io/projected/e57833b9-6496-4543-ba4c-6b596f787168-kube-api-access-rzglk\") pod \"e57833b9-6496-4543-ba4c-6b596f787168\" (UID: \"e57833b9-6496-4543-ba4c-6b596f787168\") " Mar 08 01:22:04 crc kubenswrapper[4762]: I0308 01:22:04.882357 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57833b9-6496-4543-ba4c-6b596f787168-kube-api-access-rzglk" (OuterVolumeSpecName: "kube-api-access-rzglk") pod "e57833b9-6496-4543-ba4c-6b596f787168" (UID: "e57833b9-6496-4543-ba4c-6b596f787168"). InnerVolumeSpecName "kube-api-access-rzglk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:22:04 crc kubenswrapper[4762]: I0308 01:22:04.977556 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzglk\" (UniqueName: \"kubernetes.io/projected/e57833b9-6496-4543-ba4c-6b596f787168-kube-api-access-rzglk\") on node \"crc\" DevicePath \"\"" Mar 08 01:22:05 crc kubenswrapper[4762]: I0308 01:22:05.174577 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548882-62fnf" event={"ID":"e57833b9-6496-4543-ba4c-6b596f787168","Type":"ContainerDied","Data":"405b97265c45b19b8c7b0f7c30f9b82bb66595cea435325ae9de331ae82bd139"} Mar 08 01:22:05 crc kubenswrapper[4762]: I0308 01:22:05.174964 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="405b97265c45b19b8c7b0f7c30f9b82bb66595cea435325ae9de331ae82bd139" Mar 08 01:22:05 crc kubenswrapper[4762]: I0308 01:22:05.174642 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548882-62fnf" Mar 08 01:22:05 crc kubenswrapper[4762]: I0308 01:22:05.813066 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548876-bcx8c"] Mar 08 01:22:05 crc kubenswrapper[4762]: I0308 01:22:05.823472 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548876-bcx8c"] Mar 08 01:22:07 crc kubenswrapper[4762]: I0308 01:22:07.293082 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="583cac29-3bb2-4e52-802d-288ba8775619" path="/var/lib/kubelet/pods/583cac29-3bb2-4e52-802d-288ba8775619/volumes" Mar 08 01:22:27 crc kubenswrapper[4762]: I0308 01:22:27.448919 4762 generic.go:334] "Generic (PLEG): container finished" podID="6fdab820-9b0c-4bb9-b3aa-c00329fcf356" containerID="8fe2f6abaffd65986ee4eb162b5f1e5890f168c11f977aaa1b5a618a399e4378" exitCode=0 Mar 08 01:22:27 crc kubenswrapper[4762]: I0308 01:22:27.449003 4762 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c" event={"ID":"6fdab820-9b0c-4bb9-b3aa-c00329fcf356","Type":"ContainerDied","Data":"8fe2f6abaffd65986ee4eb162b5f1e5890f168c11f977aaa1b5a618a399e4378"} Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.142111 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.258296 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-ceph\") pod \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.258396 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txsns\" (UniqueName: \"kubernetes.io/projected/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-kube-api-access-txsns\") pod \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.258460 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-neutron-metadata-combined-ca-bundle\") pod \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.258555 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-neutron-ovn-metadata-agent-neutron-config-0\") pod \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " Mar 
08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.258733 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-inventory\") pod \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.258881 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-nova-metadata-neutron-config-0\") pod \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.259019 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-ssh-key-openstack-edpm-ipam\") pod \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\" (UID: \"6fdab820-9b0c-4bb9-b3aa-c00329fcf356\") " Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.266165 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "6fdab820-9b0c-4bb9-b3aa-c00329fcf356" (UID: "6fdab820-9b0c-4bb9-b3aa-c00329fcf356"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.266897 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-ceph" (OuterVolumeSpecName: "ceph") pod "6fdab820-9b0c-4bb9-b3aa-c00329fcf356" (UID: "6fdab820-9b0c-4bb9-b3aa-c00329fcf356"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.267971 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-kube-api-access-txsns" (OuterVolumeSpecName: "kube-api-access-txsns") pod "6fdab820-9b0c-4bb9-b3aa-c00329fcf356" (UID: "6fdab820-9b0c-4bb9-b3aa-c00329fcf356"). InnerVolumeSpecName "kube-api-access-txsns". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.297120 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "6fdab820-9b0c-4bb9-b3aa-c00329fcf356" (UID: "6fdab820-9b0c-4bb9-b3aa-c00329fcf356"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.310805 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "6fdab820-9b0c-4bb9-b3aa-c00329fcf356" (UID: "6fdab820-9b0c-4bb9-b3aa-c00329fcf356"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.315489 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-inventory" (OuterVolumeSpecName: "inventory") pod "6fdab820-9b0c-4bb9-b3aa-c00329fcf356" (UID: "6fdab820-9b0c-4bb9-b3aa-c00329fcf356"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.321583 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6fdab820-9b0c-4bb9-b3aa-c00329fcf356" (UID: "6fdab820-9b0c-4bb9-b3aa-c00329fcf356"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.363443 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.363624 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txsns\" (UniqueName: \"kubernetes.io/projected/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-kube-api-access-txsns\") on node \"crc\" DevicePath \"\"" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.363684 4762 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.363702 4762 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.363742 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 
01:22:29.363793 4762 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.363808 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6fdab820-9b0c-4bb9-b3aa-c00329fcf356-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.468143 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c" event={"ID":"6fdab820-9b0c-4bb9-b3aa-c00329fcf356","Type":"ContainerDied","Data":"e146073a603c7923a94e7980eabae9f1989d9db7fa64bfa1b002f358a4354631"} Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.468217 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e146073a603c7923a94e7980eabae9f1989d9db7fa64bfa1b002f358a4354631" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.468216 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.669736 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt"] Mar 08 01:22:29 crc kubenswrapper[4762]: E0308 01:22:29.670528 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57833b9-6496-4543-ba4c-6b596f787168" containerName="oc" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.670643 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57833b9-6496-4543-ba4c-6b596f787168" containerName="oc" Mar 08 01:22:29 crc kubenswrapper[4762]: E0308 01:22:29.670831 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fdab820-9b0c-4bb9-b3aa-c00329fcf356" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.670933 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fdab820-9b0c-4bb9-b3aa-c00329fcf356" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.671319 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e57833b9-6496-4543-ba4c-6b596f787168" containerName="oc" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.671449 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fdab820-9b0c-4bb9-b3aa-c00329fcf356" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.672710 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.674981 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.675235 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.675379 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.675521 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.675669 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.676028 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.691525 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt"] Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.773359 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.773424 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.773517 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlgjs\" (UniqueName: \"kubernetes.io/projected/ff17c293-0613-494b-a138-a29de53bb297-kube-api-access-zlgjs\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.773635 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.773699 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.773809 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.876225 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.876272 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.876316 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlgjs\" (UniqueName: \"kubernetes.io/projected/ff17c293-0613-494b-a138-a29de53bb297-kube-api-access-zlgjs\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.876343 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.876365 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.876405 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.880951 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.881047 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.881614 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc 
kubenswrapper[4762]: I0308 01:22:29.882018 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.884486 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:29 crc kubenswrapper[4762]: I0308 01:22:29.906326 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlgjs\" (UniqueName: \"kubernetes.io/projected/ff17c293-0613-494b-a138-a29de53bb297-kube-api-access-zlgjs\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:30 crc kubenswrapper[4762]: I0308 01:22:30.006881 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:22:30 crc kubenswrapper[4762]: I0308 01:22:30.648414 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt"] Mar 08 01:22:31 crc kubenswrapper[4762]: I0308 01:22:31.513518 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" event={"ID":"ff17c293-0613-494b-a138-a29de53bb297","Type":"ContainerStarted","Data":"368e4180617569d2e65d67b6cba132dc7dba4672f400d8ce729c829af808872c"} Mar 08 01:22:31 crc kubenswrapper[4762]: I0308 01:22:31.513926 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" event={"ID":"ff17c293-0613-494b-a138-a29de53bb297","Type":"ContainerStarted","Data":"b94b38f0f7aa450dd995b5f5f99b6113739331e31a2e26c9fae96cbb59e42041"} Mar 08 01:22:31 crc kubenswrapper[4762]: I0308 01:22:31.566295 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" podStartSLOduration=2.1336695199999998 podStartE2EDuration="2.566269866s" podCreationTimestamp="2026-03-08 01:22:29 +0000 UTC" firstStartedPulling="2026-03-08 01:22:30.656132576 +0000 UTC m=+3572.130276920" lastFinishedPulling="2026-03-08 01:22:31.088732912 +0000 UTC m=+3572.562877266" observedRunningTime="2026-03-08 01:22:31.538179888 +0000 UTC m=+3573.012324242" watchObservedRunningTime="2026-03-08 01:22:31.566269866 +0000 UTC m=+3573.040414240" Mar 08 01:22:31 crc kubenswrapper[4762]: I0308 01:22:31.782909 4762 scope.go:117] "RemoveContainer" containerID="23a2f6264104b4c44953adfe568ec3a7cbe1f07634a04f961a3aa5aa6a5c2560" Mar 08 01:23:05 crc kubenswrapper[4762]: I0308 01:23:05.398178 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9s77m"] Mar 08 01:23:05 crc kubenswrapper[4762]: I0308 
01:23:05.403396 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:05 crc kubenswrapper[4762]: I0308 01:23:05.414098 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9s77m"] Mar 08 01:23:05 crc kubenswrapper[4762]: I0308 01:23:05.447408 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/795686fc-b0f8-45a2-a778-158bb7c8f5c9-catalog-content\") pod \"redhat-marketplace-9s77m\" (UID: \"795686fc-b0f8-45a2-a778-158bb7c8f5c9\") " pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:05 crc kubenswrapper[4762]: I0308 01:23:05.447479 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gf62\" (UniqueName: \"kubernetes.io/projected/795686fc-b0f8-45a2-a778-158bb7c8f5c9-kube-api-access-2gf62\") pod \"redhat-marketplace-9s77m\" (UID: \"795686fc-b0f8-45a2-a778-158bb7c8f5c9\") " pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:05 crc kubenswrapper[4762]: I0308 01:23:05.447607 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/795686fc-b0f8-45a2-a778-158bb7c8f5c9-utilities\") pod \"redhat-marketplace-9s77m\" (UID: \"795686fc-b0f8-45a2-a778-158bb7c8f5c9\") " pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:05 crc kubenswrapper[4762]: I0308 01:23:05.548872 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/795686fc-b0f8-45a2-a778-158bb7c8f5c9-catalog-content\") pod \"redhat-marketplace-9s77m\" (UID: \"795686fc-b0f8-45a2-a778-158bb7c8f5c9\") " pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:05 crc kubenswrapper[4762]: 
I0308 01:23:05.548936 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gf62\" (UniqueName: \"kubernetes.io/projected/795686fc-b0f8-45a2-a778-158bb7c8f5c9-kube-api-access-2gf62\") pod \"redhat-marketplace-9s77m\" (UID: \"795686fc-b0f8-45a2-a778-158bb7c8f5c9\") " pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:05 crc kubenswrapper[4762]: I0308 01:23:05.549037 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/795686fc-b0f8-45a2-a778-158bb7c8f5c9-utilities\") pod \"redhat-marketplace-9s77m\" (UID: \"795686fc-b0f8-45a2-a778-158bb7c8f5c9\") " pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:05 crc kubenswrapper[4762]: I0308 01:23:05.549521 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/795686fc-b0f8-45a2-a778-158bb7c8f5c9-catalog-content\") pod \"redhat-marketplace-9s77m\" (UID: \"795686fc-b0f8-45a2-a778-158bb7c8f5c9\") " pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:05 crc kubenswrapper[4762]: I0308 01:23:05.549542 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/795686fc-b0f8-45a2-a778-158bb7c8f5c9-utilities\") pod \"redhat-marketplace-9s77m\" (UID: \"795686fc-b0f8-45a2-a778-158bb7c8f5c9\") " pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:05 crc kubenswrapper[4762]: I0308 01:23:05.576928 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gf62\" (UniqueName: \"kubernetes.io/projected/795686fc-b0f8-45a2-a778-158bb7c8f5c9-kube-api-access-2gf62\") pod \"redhat-marketplace-9s77m\" (UID: \"795686fc-b0f8-45a2-a778-158bb7c8f5c9\") " pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:05 crc kubenswrapper[4762]: I0308 01:23:05.751360 4762 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:06 crc kubenswrapper[4762]: I0308 01:23:06.283076 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9s77m"] Mar 08 01:23:06 crc kubenswrapper[4762]: I0308 01:23:06.917551 4762 generic.go:334] "Generic (PLEG): container finished" podID="795686fc-b0f8-45a2-a778-158bb7c8f5c9" containerID="f78fb1326e3d1c81e65e3c245066d28ebed072aa7d440b97ca8627dae9b8f972" exitCode=0 Mar 08 01:23:06 crc kubenswrapper[4762]: I0308 01:23:06.917953 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9s77m" event={"ID":"795686fc-b0f8-45a2-a778-158bb7c8f5c9","Type":"ContainerDied","Data":"f78fb1326e3d1c81e65e3c245066d28ebed072aa7d440b97ca8627dae9b8f972"} Mar 08 01:23:06 crc kubenswrapper[4762]: I0308 01:23:06.917994 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9s77m" event={"ID":"795686fc-b0f8-45a2-a778-158bb7c8f5c9","Type":"ContainerStarted","Data":"5e0ae676bfe04ecba8db9de593706f078c4886a2bd55d0019a86d495d47522a7"} Mar 08 01:23:07 crc kubenswrapper[4762]: I0308 01:23:07.927968 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9s77m" event={"ID":"795686fc-b0f8-45a2-a778-158bb7c8f5c9","Type":"ContainerStarted","Data":"c961b6121b428acbd8040961cfecf1b65c93b9d21c877b6a1cb1c4d60d21ae34"} Mar 08 01:23:08 crc kubenswrapper[4762]: I0308 01:23:08.940344 4762 generic.go:334] "Generic (PLEG): container finished" podID="795686fc-b0f8-45a2-a778-158bb7c8f5c9" containerID="c961b6121b428acbd8040961cfecf1b65c93b9d21c877b6a1cb1c4d60d21ae34" exitCode=0 Mar 08 01:23:08 crc kubenswrapper[4762]: I0308 01:23:08.940476 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9s77m" 
event={"ID":"795686fc-b0f8-45a2-a778-158bb7c8f5c9","Type":"ContainerDied","Data":"c961b6121b428acbd8040961cfecf1b65c93b9d21c877b6a1cb1c4d60d21ae34"} Mar 08 01:23:09 crc kubenswrapper[4762]: I0308 01:23:09.953165 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9s77m" event={"ID":"795686fc-b0f8-45a2-a778-158bb7c8f5c9","Type":"ContainerStarted","Data":"7342ca522c27e5382f787df9a6a3be2348865191fda68a420c9b87f20f2ba2ea"} Mar 08 01:23:09 crc kubenswrapper[4762]: I0308 01:23:09.983190 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9s77m" podStartSLOduration=2.561847573 podStartE2EDuration="4.983171086s" podCreationTimestamp="2026-03-08 01:23:05 +0000 UTC" firstStartedPulling="2026-03-08 01:23:06.920542828 +0000 UTC m=+3608.394687212" lastFinishedPulling="2026-03-08 01:23:09.341866351 +0000 UTC m=+3610.816010725" observedRunningTime="2026-03-08 01:23:09.980911357 +0000 UTC m=+3611.455055701" watchObservedRunningTime="2026-03-08 01:23:09.983171086 +0000 UTC m=+3611.457315440" Mar 08 01:23:15 crc kubenswrapper[4762]: I0308 01:23:15.751913 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:15 crc kubenswrapper[4762]: I0308 01:23:15.752566 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:15 crc kubenswrapper[4762]: I0308 01:23:15.842191 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:16 crc kubenswrapper[4762]: I0308 01:23:16.082859 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:16 crc kubenswrapper[4762]: I0308 01:23:16.152418 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-9s77m"] Mar 08 01:23:18 crc kubenswrapper[4762]: I0308 01:23:18.061903 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9s77m" podUID="795686fc-b0f8-45a2-a778-158bb7c8f5c9" containerName="registry-server" containerID="cri-o://7342ca522c27e5382f787df9a6a3be2348865191fda68a420c9b87f20f2ba2ea" gracePeriod=2 Mar 08 01:23:18 crc kubenswrapper[4762]: I0308 01:23:18.712648 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:18 crc kubenswrapper[4762]: I0308 01:23:18.888161 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/795686fc-b0f8-45a2-a778-158bb7c8f5c9-utilities\") pod \"795686fc-b0f8-45a2-a778-158bb7c8f5c9\" (UID: \"795686fc-b0f8-45a2-a778-158bb7c8f5c9\") " Mar 08 01:23:18 crc kubenswrapper[4762]: I0308 01:23:18.888706 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gf62\" (UniqueName: \"kubernetes.io/projected/795686fc-b0f8-45a2-a778-158bb7c8f5c9-kube-api-access-2gf62\") pod \"795686fc-b0f8-45a2-a778-158bb7c8f5c9\" (UID: \"795686fc-b0f8-45a2-a778-158bb7c8f5c9\") " Mar 08 01:23:18 crc kubenswrapper[4762]: I0308 01:23:18.888817 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/795686fc-b0f8-45a2-a778-158bb7c8f5c9-catalog-content\") pod \"795686fc-b0f8-45a2-a778-158bb7c8f5c9\" (UID: \"795686fc-b0f8-45a2-a778-158bb7c8f5c9\") " Mar 08 01:23:18 crc kubenswrapper[4762]: I0308 01:23:18.889932 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/795686fc-b0f8-45a2-a778-158bb7c8f5c9-utilities" (OuterVolumeSpecName: "utilities") pod "795686fc-b0f8-45a2-a778-158bb7c8f5c9" (UID: 
"795686fc-b0f8-45a2-a778-158bb7c8f5c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:23:18 crc kubenswrapper[4762]: I0308 01:23:18.910659 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/795686fc-b0f8-45a2-a778-158bb7c8f5c9-kube-api-access-2gf62" (OuterVolumeSpecName: "kube-api-access-2gf62") pod "795686fc-b0f8-45a2-a778-158bb7c8f5c9" (UID: "795686fc-b0f8-45a2-a778-158bb7c8f5c9"). InnerVolumeSpecName "kube-api-access-2gf62". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:23:18 crc kubenswrapper[4762]: I0308 01:23:18.927867 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/795686fc-b0f8-45a2-a778-158bb7c8f5c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "795686fc-b0f8-45a2-a778-158bb7c8f5c9" (UID: "795686fc-b0f8-45a2-a778-158bb7c8f5c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:23:18 crc kubenswrapper[4762]: I0308 01:23:18.992448 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/795686fc-b0f8-45a2-a778-158bb7c8f5c9-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:23:18 crc kubenswrapper[4762]: I0308 01:23:18.992492 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gf62\" (UniqueName: \"kubernetes.io/projected/795686fc-b0f8-45a2-a778-158bb7c8f5c9-kube-api-access-2gf62\") on node \"crc\" DevicePath \"\"" Mar 08 01:23:18 crc kubenswrapper[4762]: I0308 01:23:18.992509 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/795686fc-b0f8-45a2-a778-158bb7c8f5c9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:23:19 crc kubenswrapper[4762]: I0308 01:23:19.076955 4762 generic.go:334] "Generic (PLEG): container finished" 
podID="795686fc-b0f8-45a2-a778-158bb7c8f5c9" containerID="7342ca522c27e5382f787df9a6a3be2348865191fda68a420c9b87f20f2ba2ea" exitCode=0 Mar 08 01:23:19 crc kubenswrapper[4762]: I0308 01:23:19.077039 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9s77m" Mar 08 01:23:19 crc kubenswrapper[4762]: I0308 01:23:19.077040 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9s77m" event={"ID":"795686fc-b0f8-45a2-a778-158bb7c8f5c9","Type":"ContainerDied","Data":"7342ca522c27e5382f787df9a6a3be2348865191fda68a420c9b87f20f2ba2ea"} Mar 08 01:23:19 crc kubenswrapper[4762]: I0308 01:23:19.077206 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9s77m" event={"ID":"795686fc-b0f8-45a2-a778-158bb7c8f5c9","Type":"ContainerDied","Data":"5e0ae676bfe04ecba8db9de593706f078c4886a2bd55d0019a86d495d47522a7"} Mar 08 01:23:19 crc kubenswrapper[4762]: I0308 01:23:19.077245 4762 scope.go:117] "RemoveContainer" containerID="7342ca522c27e5382f787df9a6a3be2348865191fda68a420c9b87f20f2ba2ea" Mar 08 01:23:19 crc kubenswrapper[4762]: I0308 01:23:19.113514 4762 scope.go:117] "RemoveContainer" containerID="c961b6121b428acbd8040961cfecf1b65c93b9d21c877b6a1cb1c4d60d21ae34" Mar 08 01:23:19 crc kubenswrapper[4762]: I0308 01:23:19.149424 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9s77m"] Mar 08 01:23:19 crc kubenswrapper[4762]: I0308 01:23:19.160118 4762 scope.go:117] "RemoveContainer" containerID="f78fb1326e3d1c81e65e3c245066d28ebed072aa7d440b97ca8627dae9b8f972" Mar 08 01:23:19 crc kubenswrapper[4762]: I0308 01:23:19.177684 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9s77m"] Mar 08 01:23:19 crc kubenswrapper[4762]: I0308 01:23:19.217750 4762 scope.go:117] "RemoveContainer" 
containerID="7342ca522c27e5382f787df9a6a3be2348865191fda68a420c9b87f20f2ba2ea" Mar 08 01:23:19 crc kubenswrapper[4762]: E0308 01:23:19.218430 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7342ca522c27e5382f787df9a6a3be2348865191fda68a420c9b87f20f2ba2ea\": container with ID starting with 7342ca522c27e5382f787df9a6a3be2348865191fda68a420c9b87f20f2ba2ea not found: ID does not exist" containerID="7342ca522c27e5382f787df9a6a3be2348865191fda68a420c9b87f20f2ba2ea" Mar 08 01:23:19 crc kubenswrapper[4762]: I0308 01:23:19.218495 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7342ca522c27e5382f787df9a6a3be2348865191fda68a420c9b87f20f2ba2ea"} err="failed to get container status \"7342ca522c27e5382f787df9a6a3be2348865191fda68a420c9b87f20f2ba2ea\": rpc error: code = NotFound desc = could not find container \"7342ca522c27e5382f787df9a6a3be2348865191fda68a420c9b87f20f2ba2ea\": container with ID starting with 7342ca522c27e5382f787df9a6a3be2348865191fda68a420c9b87f20f2ba2ea not found: ID does not exist" Mar 08 01:23:19 crc kubenswrapper[4762]: I0308 01:23:19.218533 4762 scope.go:117] "RemoveContainer" containerID="c961b6121b428acbd8040961cfecf1b65c93b9d21c877b6a1cb1c4d60d21ae34" Mar 08 01:23:19 crc kubenswrapper[4762]: E0308 01:23:19.219020 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c961b6121b428acbd8040961cfecf1b65c93b9d21c877b6a1cb1c4d60d21ae34\": container with ID starting with c961b6121b428acbd8040961cfecf1b65c93b9d21c877b6a1cb1c4d60d21ae34 not found: ID does not exist" containerID="c961b6121b428acbd8040961cfecf1b65c93b9d21c877b6a1cb1c4d60d21ae34" Mar 08 01:23:19 crc kubenswrapper[4762]: I0308 01:23:19.219102 4762 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c961b6121b428acbd8040961cfecf1b65c93b9d21c877b6a1cb1c4d60d21ae34"} err="failed to get container status \"c961b6121b428acbd8040961cfecf1b65c93b9d21c877b6a1cb1c4d60d21ae34\": rpc error: code = NotFound desc = could not find container \"c961b6121b428acbd8040961cfecf1b65c93b9d21c877b6a1cb1c4d60d21ae34\": container with ID starting with c961b6121b428acbd8040961cfecf1b65c93b9d21c877b6a1cb1c4d60d21ae34 not found: ID does not exist" Mar 08 01:23:19 crc kubenswrapper[4762]: I0308 01:23:19.219160 4762 scope.go:117] "RemoveContainer" containerID="f78fb1326e3d1c81e65e3c245066d28ebed072aa7d440b97ca8627dae9b8f972" Mar 08 01:23:19 crc kubenswrapper[4762]: E0308 01:23:19.219584 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f78fb1326e3d1c81e65e3c245066d28ebed072aa7d440b97ca8627dae9b8f972\": container with ID starting with f78fb1326e3d1c81e65e3c245066d28ebed072aa7d440b97ca8627dae9b8f972 not found: ID does not exist" containerID="f78fb1326e3d1c81e65e3c245066d28ebed072aa7d440b97ca8627dae9b8f972" Mar 08 01:23:19 crc kubenswrapper[4762]: I0308 01:23:19.219673 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78fb1326e3d1c81e65e3c245066d28ebed072aa7d440b97ca8627dae9b8f972"} err="failed to get container status \"f78fb1326e3d1c81e65e3c245066d28ebed072aa7d440b97ca8627dae9b8f972\": rpc error: code = NotFound desc = could not find container \"f78fb1326e3d1c81e65e3c245066d28ebed072aa7d440b97ca8627dae9b8f972\": container with ID starting with f78fb1326e3d1c81e65e3c245066d28ebed072aa7d440b97ca8627dae9b8f972 not found: ID does not exist" Mar 08 01:23:19 crc kubenswrapper[4762]: I0308 01:23:19.279235 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="795686fc-b0f8-45a2-a778-158bb7c8f5c9" path="/var/lib/kubelet/pods/795686fc-b0f8-45a2-a778-158bb7c8f5c9/volumes" Mar 08 01:24:00 crc kubenswrapper[4762]: I0308 
01:24:00.174617 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548884-5289h"] Mar 08 01:24:00 crc kubenswrapper[4762]: E0308 01:24:00.176246 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795686fc-b0f8-45a2-a778-158bb7c8f5c9" containerName="extract-content" Mar 08 01:24:00 crc kubenswrapper[4762]: I0308 01:24:00.176273 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="795686fc-b0f8-45a2-a778-158bb7c8f5c9" containerName="extract-content" Mar 08 01:24:00 crc kubenswrapper[4762]: E0308 01:24:00.176317 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795686fc-b0f8-45a2-a778-158bb7c8f5c9" containerName="registry-server" Mar 08 01:24:00 crc kubenswrapper[4762]: I0308 01:24:00.176330 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="795686fc-b0f8-45a2-a778-158bb7c8f5c9" containerName="registry-server" Mar 08 01:24:00 crc kubenswrapper[4762]: E0308 01:24:00.176351 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795686fc-b0f8-45a2-a778-158bb7c8f5c9" containerName="extract-utilities" Mar 08 01:24:00 crc kubenswrapper[4762]: I0308 01:24:00.176365 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="795686fc-b0f8-45a2-a778-158bb7c8f5c9" containerName="extract-utilities" Mar 08 01:24:00 crc kubenswrapper[4762]: I0308 01:24:00.176786 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="795686fc-b0f8-45a2-a778-158bb7c8f5c9" containerName="registry-server" Mar 08 01:24:00 crc kubenswrapper[4762]: I0308 01:24:00.178003 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548884-5289h" Mar 08 01:24:00 crc kubenswrapper[4762]: I0308 01:24:00.182330 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:24:00 crc kubenswrapper[4762]: I0308 01:24:00.190580 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:24:00 crc kubenswrapper[4762]: I0308 01:24:00.192455 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:24:00 crc kubenswrapper[4762]: I0308 01:24:00.212513 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548884-5289h"] Mar 08 01:24:00 crc kubenswrapper[4762]: I0308 01:24:00.312148 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mthbc\" (UniqueName: \"kubernetes.io/projected/8f5f9ac4-2a57-4024-a546-3de917d6528b-kube-api-access-mthbc\") pod \"auto-csr-approver-29548884-5289h\" (UID: \"8f5f9ac4-2a57-4024-a546-3de917d6528b\") " pod="openshift-infra/auto-csr-approver-29548884-5289h" Mar 08 01:24:00 crc kubenswrapper[4762]: I0308 01:24:00.414685 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mthbc\" (UniqueName: \"kubernetes.io/projected/8f5f9ac4-2a57-4024-a546-3de917d6528b-kube-api-access-mthbc\") pod \"auto-csr-approver-29548884-5289h\" (UID: \"8f5f9ac4-2a57-4024-a546-3de917d6528b\") " pod="openshift-infra/auto-csr-approver-29548884-5289h" Mar 08 01:24:00 crc kubenswrapper[4762]: I0308 01:24:00.449571 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mthbc\" (UniqueName: \"kubernetes.io/projected/8f5f9ac4-2a57-4024-a546-3de917d6528b-kube-api-access-mthbc\") pod \"auto-csr-approver-29548884-5289h\" (UID: \"8f5f9ac4-2a57-4024-a546-3de917d6528b\") " 
pod="openshift-infra/auto-csr-approver-29548884-5289h" Mar 08 01:24:00 crc kubenswrapper[4762]: I0308 01:24:00.513899 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548884-5289h" Mar 08 01:24:00 crc kubenswrapper[4762]: I0308 01:24:00.851276 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548884-5289h"] Mar 08 01:24:01 crc kubenswrapper[4762]: I0308 01:24:01.650601 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548884-5289h" event={"ID":"8f5f9ac4-2a57-4024-a546-3de917d6528b","Type":"ContainerStarted","Data":"8b1e227ecb0bfb2ece3f564fed93dc6fc405b4be5d97e4d8cb91b7b54bd9434d"} Mar 08 01:24:02 crc kubenswrapper[4762]: I0308 01:24:02.669325 4762 generic.go:334] "Generic (PLEG): container finished" podID="8f5f9ac4-2a57-4024-a546-3de917d6528b" containerID="b9720105288e425e1c661fbea9fbcb99f3198dba71066d187a11bca95c80bea7" exitCode=0 Mar 08 01:24:02 crc kubenswrapper[4762]: I0308 01:24:02.669418 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548884-5289h" event={"ID":"8f5f9ac4-2a57-4024-a546-3de917d6528b","Type":"ContainerDied","Data":"b9720105288e425e1c661fbea9fbcb99f3198dba71066d187a11bca95c80bea7"} Mar 08 01:24:04 crc kubenswrapper[4762]: I0308 01:24:04.073928 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548884-5289h" Mar 08 01:24:04 crc kubenswrapper[4762]: I0308 01:24:04.219394 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mthbc\" (UniqueName: \"kubernetes.io/projected/8f5f9ac4-2a57-4024-a546-3de917d6528b-kube-api-access-mthbc\") pod \"8f5f9ac4-2a57-4024-a546-3de917d6528b\" (UID: \"8f5f9ac4-2a57-4024-a546-3de917d6528b\") " Mar 08 01:24:04 crc kubenswrapper[4762]: I0308 01:24:04.228441 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f5f9ac4-2a57-4024-a546-3de917d6528b-kube-api-access-mthbc" (OuterVolumeSpecName: "kube-api-access-mthbc") pod "8f5f9ac4-2a57-4024-a546-3de917d6528b" (UID: "8f5f9ac4-2a57-4024-a546-3de917d6528b"). InnerVolumeSpecName "kube-api-access-mthbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:24:04 crc kubenswrapper[4762]: I0308 01:24:04.322491 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mthbc\" (UniqueName: \"kubernetes.io/projected/8f5f9ac4-2a57-4024-a546-3de917d6528b-kube-api-access-mthbc\") on node \"crc\" DevicePath \"\"" Mar 08 01:24:04 crc kubenswrapper[4762]: I0308 01:24:04.690219 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548884-5289h" event={"ID":"8f5f9ac4-2a57-4024-a546-3de917d6528b","Type":"ContainerDied","Data":"8b1e227ecb0bfb2ece3f564fed93dc6fc405b4be5d97e4d8cb91b7b54bd9434d"} Mar 08 01:24:04 crc kubenswrapper[4762]: I0308 01:24:04.690593 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b1e227ecb0bfb2ece3f564fed93dc6fc405b4be5d97e4d8cb91b7b54bd9434d" Mar 08 01:24:04 crc kubenswrapper[4762]: I0308 01:24:04.690294 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548884-5289h" Mar 08 01:24:05 crc kubenswrapper[4762]: I0308 01:24:05.177583 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548878-j8mv8"] Mar 08 01:24:05 crc kubenswrapper[4762]: I0308 01:24:05.193042 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548878-j8mv8"] Mar 08 01:24:05 crc kubenswrapper[4762]: I0308 01:24:05.285331 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0de1641-d236-4654-b681-56222a8d26aa" path="/var/lib/kubelet/pods/a0de1641-d236-4654-b681-56222a8d26aa/volumes" Mar 08 01:24:12 crc kubenswrapper[4762]: I0308 01:24:12.852203 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:24:12 crc kubenswrapper[4762]: I0308 01:24:12.853136 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:24:31 crc kubenswrapper[4762]: I0308 01:24:31.944073 4762 scope.go:117] "RemoveContainer" containerID="9ea49bdc202fd68cf5b395dbfe9d8bb5fa8c96b77286cd15c85f1911407fd1ce" Mar 08 01:24:42 crc kubenswrapper[4762]: I0308 01:24:42.852312 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:24:42 crc kubenswrapper[4762]: 
I0308 01:24:42.852832 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:25:12 crc kubenswrapper[4762]: I0308 01:25:12.851400 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:25:12 crc kubenswrapper[4762]: I0308 01:25:12.852068 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:25:12 crc kubenswrapper[4762]: I0308 01:25:12.852139 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 01:25:12 crc kubenswrapper[4762]: I0308 01:25:12.853245 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac23d3f891dec3c1df5f19e5bcfc8231d42d9eccb5121b7380d9b3a1de7360b0"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 01:25:12 crc kubenswrapper[4762]: I0308 01:25:12.853329 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" 
containerName="machine-config-daemon" containerID="cri-o://ac23d3f891dec3c1df5f19e5bcfc8231d42d9eccb5121b7380d9b3a1de7360b0" gracePeriod=600 Mar 08 01:25:13 crc kubenswrapper[4762]: I0308 01:25:13.540139 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="ac23d3f891dec3c1df5f19e5bcfc8231d42d9eccb5121b7380d9b3a1de7360b0" exitCode=0 Mar 08 01:25:13 crc kubenswrapper[4762]: I0308 01:25:13.540239 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"ac23d3f891dec3c1df5f19e5bcfc8231d42d9eccb5121b7380d9b3a1de7360b0"} Mar 08 01:25:13 crc kubenswrapper[4762]: I0308 01:25:13.540489 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725"} Mar 08 01:25:13 crc kubenswrapper[4762]: I0308 01:25:13.540514 4762 scope.go:117] "RemoveContainer" containerID="283e4552c38156a1626ca65e6f7b29b7a78451a1e6aff33a1c87752f26b46499" Mar 08 01:25:28 crc kubenswrapper[4762]: I0308 01:25:28.772741 4762 generic.go:334] "Generic (PLEG): container finished" podID="ff17c293-0613-494b-a138-a29de53bb297" containerID="368e4180617569d2e65d67b6cba132dc7dba4672f400d8ce729c829af808872c" exitCode=0 Mar 08 01:25:28 crc kubenswrapper[4762]: I0308 01:25:28.772828 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" event={"ID":"ff17c293-0613-494b-a138-a29de53bb297","Type":"ContainerDied","Data":"368e4180617569d2e65d67b6cba132dc7dba4672f400d8ce729c829af808872c"} Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.345264 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.455942 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-libvirt-secret-0\") pod \"ff17c293-0613-494b-a138-a29de53bb297\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.456063 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-libvirt-combined-ca-bundle\") pod \"ff17c293-0613-494b-a138-a29de53bb297\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.456190 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlgjs\" (UniqueName: \"kubernetes.io/projected/ff17c293-0613-494b-a138-a29de53bb297-kube-api-access-zlgjs\") pod \"ff17c293-0613-494b-a138-a29de53bb297\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.456247 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-inventory\") pod \"ff17c293-0613-494b-a138-a29de53bb297\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.456284 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-ceph\") pod \"ff17c293-0613-494b-a138-a29de53bb297\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.456402 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-ssh-key-openstack-edpm-ipam\") pod \"ff17c293-0613-494b-a138-a29de53bb297\" (UID: \"ff17c293-0613-494b-a138-a29de53bb297\") " Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.461954 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff17c293-0613-494b-a138-a29de53bb297-kube-api-access-zlgjs" (OuterVolumeSpecName: "kube-api-access-zlgjs") pod "ff17c293-0613-494b-a138-a29de53bb297" (UID: "ff17c293-0613-494b-a138-a29de53bb297"). InnerVolumeSpecName "kube-api-access-zlgjs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.463712 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-ceph" (OuterVolumeSpecName: "ceph") pod "ff17c293-0613-494b-a138-a29de53bb297" (UID: "ff17c293-0613-494b-a138-a29de53bb297"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.465428 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "ff17c293-0613-494b-a138-a29de53bb297" (UID: "ff17c293-0613-494b-a138-a29de53bb297"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.496382 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-inventory" (OuterVolumeSpecName: "inventory") pod "ff17c293-0613-494b-a138-a29de53bb297" (UID: "ff17c293-0613-494b-a138-a29de53bb297"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.498521 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ff17c293-0613-494b-a138-a29de53bb297" (UID: "ff17c293-0613-494b-a138-a29de53bb297"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.508002 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "ff17c293-0613-494b-a138-a29de53bb297" (UID: "ff17c293-0613-494b-a138-a29de53bb297"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.559036 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlgjs\" (UniqueName: \"kubernetes.io/projected/ff17c293-0613-494b-a138-a29de53bb297-kube-api-access-zlgjs\") on node \"crc\" DevicePath \"\"" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.559086 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.559105 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.559125 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.559178 4762 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.559196 4762 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff17c293-0613-494b-a138-a29de53bb297-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.802593 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" event={"ID":"ff17c293-0613-494b-a138-a29de53bb297","Type":"ContainerDied","Data":"b94b38f0f7aa450dd995b5f5f99b6113739331e31a2e26c9fae96cbb59e42041"} Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.802977 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b94b38f0f7aa450dd995b5f5f99b6113739331e31a2e26c9fae96cbb59e42041" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.802946 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.943922 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz"] Mar 08 01:25:30 crc kubenswrapper[4762]: E0308 01:25:30.944927 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5f9ac4-2a57-4024-a546-3de917d6528b" containerName="oc" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.944961 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5f9ac4-2a57-4024-a546-3de917d6528b" containerName="oc" Mar 08 01:25:30 crc kubenswrapper[4762]: E0308 01:25:30.945020 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff17c293-0613-494b-a138-a29de53bb297" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.945032 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff17c293-0613-494b-a138-a29de53bb297" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.945295 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff17c293-0613-494b-a138-a29de53bb297" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.945341 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f5f9ac4-2a57-4024-a546-3de917d6528b" containerName="oc" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.946264 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.948599 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.950317 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.951057 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.951118 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.951299 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.952109 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.952722 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.953433 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.954196 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:25:30 crc kubenswrapper[4762]: I0308 01:25:30.955893 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz"] Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.069545 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.069613 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-3\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.069649 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.069735 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.069777 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: 
\"kubernetes.io/configmap/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.069810 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.069881 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.069925 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npgvj\" (UniqueName: \"kubernetes.io/projected/41caed5b-cdb9-492f-bb7b-6b799e1811e0-kube-api-access-npgvj\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.070089 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-migration-ssh-key-1\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.070323 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.070429 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-2\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.070508 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.070536 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ssh-key-openstack-edpm-ipam\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.172360 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.172467 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-2\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.172525 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.172562 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: 
\"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.172630 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.172701 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-3\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.172736 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.172850 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 
01:25:31.172898 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.172939 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.172985 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.173045 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npgvj\" (UniqueName: \"kubernetes.io/projected/41caed5b-cdb9-492f-bb7b-6b799e1811e0-kube-api-access-npgvj\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.173118 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.174725 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.174882 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.177600 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-2\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.177962 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: 
\"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.178281 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.178545 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.178563 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-3\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.178944 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc 
kubenswrapper[4762]: I0308 01:25:31.179487 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.180086 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.180290 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.181085 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.192202 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npgvj\" (UniqueName: 
\"kubernetes.io/projected/41caed5b-cdb9-492f-bb7b-6b799e1811e0-kube-api-access-npgvj\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.284402 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:25:31 crc kubenswrapper[4762]: I0308 01:25:31.878302 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz"] Mar 08 01:25:31 crc kubenswrapper[4762]: W0308 01:25:31.882066 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41caed5b_cdb9_492f_bb7b_6b799e1811e0.slice/crio-31a2fca9c75efa460e1b2a1b6ec39038767597d13c7ae55029280b7ba764151a WatchSource:0}: Error finding container 31a2fca9c75efa460e1b2a1b6ec39038767597d13c7ae55029280b7ba764151a: Status 404 returned error can't find the container with id 31a2fca9c75efa460e1b2a1b6ec39038767597d13c7ae55029280b7ba764151a Mar 08 01:25:32 crc kubenswrapper[4762]: I0308 01:25:32.826586 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" event={"ID":"41caed5b-cdb9-492f-bb7b-6b799e1811e0","Type":"ContainerStarted","Data":"89ab07d3b7e65fe24fff0e782f3af60631e0ee91f646e83d619fe4dfcff8aa9f"} Mar 08 01:25:32 crc kubenswrapper[4762]: I0308 01:25:32.827029 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" event={"ID":"41caed5b-cdb9-492f-bb7b-6b799e1811e0","Type":"ContainerStarted","Data":"31a2fca9c75efa460e1b2a1b6ec39038767597d13c7ae55029280b7ba764151a"} Mar 08 01:25:32 crc kubenswrapper[4762]: I0308 01:25:32.869595 4762 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" podStartSLOduration=2.4036757570000002 podStartE2EDuration="2.869570312s" podCreationTimestamp="2026-03-08 01:25:30 +0000 UTC" firstStartedPulling="2026-03-08 01:25:31.885699923 +0000 UTC m=+3753.359844277" lastFinishedPulling="2026-03-08 01:25:32.351594448 +0000 UTC m=+3753.825738832" observedRunningTime="2026-03-08 01:25:32.857296413 +0000 UTC m=+3754.331440787" watchObservedRunningTime="2026-03-08 01:25:32.869570312 +0000 UTC m=+3754.343714696" Mar 08 01:26:00 crc kubenswrapper[4762]: I0308 01:26:00.175117 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548886-zxl2p"] Mar 08 01:26:00 crc kubenswrapper[4762]: I0308 01:26:00.179430 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548886-zxl2p" Mar 08 01:26:00 crc kubenswrapper[4762]: I0308 01:26:00.185495 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:26:00 crc kubenswrapper[4762]: I0308 01:26:00.186063 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:26:00 crc kubenswrapper[4762]: I0308 01:26:00.186425 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:26:00 crc kubenswrapper[4762]: I0308 01:26:00.201276 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548886-zxl2p"] Mar 08 01:26:00 crc kubenswrapper[4762]: I0308 01:26:00.288119 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69klp\" (UniqueName: \"kubernetes.io/projected/d78d4dd3-c2b7-43bc-b431-250d47800781-kube-api-access-69klp\") pod \"auto-csr-approver-29548886-zxl2p\" 
(UID: \"d78d4dd3-c2b7-43bc-b431-250d47800781\") " pod="openshift-infra/auto-csr-approver-29548886-zxl2p" Mar 08 01:26:00 crc kubenswrapper[4762]: I0308 01:26:00.392940 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69klp\" (UniqueName: \"kubernetes.io/projected/d78d4dd3-c2b7-43bc-b431-250d47800781-kube-api-access-69klp\") pod \"auto-csr-approver-29548886-zxl2p\" (UID: \"d78d4dd3-c2b7-43bc-b431-250d47800781\") " pod="openshift-infra/auto-csr-approver-29548886-zxl2p" Mar 08 01:26:00 crc kubenswrapper[4762]: I0308 01:26:00.429614 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69klp\" (UniqueName: \"kubernetes.io/projected/d78d4dd3-c2b7-43bc-b431-250d47800781-kube-api-access-69klp\") pod \"auto-csr-approver-29548886-zxl2p\" (UID: \"d78d4dd3-c2b7-43bc-b431-250d47800781\") " pod="openshift-infra/auto-csr-approver-29548886-zxl2p" Mar 08 01:26:00 crc kubenswrapper[4762]: I0308 01:26:00.522628 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548886-zxl2p" Mar 08 01:26:01 crc kubenswrapper[4762]: W0308 01:26:01.088622 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd78d4dd3_c2b7_43bc_b431_250d47800781.slice/crio-07213ce6cceca809b8b181b42cde094fe713d425e2b06d45f99baf688c872708 WatchSource:0}: Error finding container 07213ce6cceca809b8b181b42cde094fe713d425e2b06d45f99baf688c872708: Status 404 returned error can't find the container with id 07213ce6cceca809b8b181b42cde094fe713d425e2b06d45f99baf688c872708 Mar 08 01:26:01 crc kubenswrapper[4762]: I0308 01:26:01.092112 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548886-zxl2p"] Mar 08 01:26:01 crc kubenswrapper[4762]: I0308 01:26:01.210035 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548886-zxl2p" event={"ID":"d78d4dd3-c2b7-43bc-b431-250d47800781","Type":"ContainerStarted","Data":"07213ce6cceca809b8b181b42cde094fe713d425e2b06d45f99baf688c872708"} Mar 08 01:26:03 crc kubenswrapper[4762]: I0308 01:26:03.243759 4762 generic.go:334] "Generic (PLEG): container finished" podID="d78d4dd3-c2b7-43bc-b431-250d47800781" containerID="b452f6792752eeba8b15844ec80e34f61b799077983c0db1e95ca8405c767c7b" exitCode=0 Mar 08 01:26:03 crc kubenswrapper[4762]: I0308 01:26:03.243901 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548886-zxl2p" event={"ID":"d78d4dd3-c2b7-43bc-b431-250d47800781","Type":"ContainerDied","Data":"b452f6792752eeba8b15844ec80e34f61b799077983c0db1e95ca8405c767c7b"} Mar 08 01:26:04 crc kubenswrapper[4762]: I0308 01:26:04.703899 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548886-zxl2p" Mar 08 01:26:04 crc kubenswrapper[4762]: I0308 01:26:04.796315 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69klp\" (UniqueName: \"kubernetes.io/projected/d78d4dd3-c2b7-43bc-b431-250d47800781-kube-api-access-69klp\") pod \"d78d4dd3-c2b7-43bc-b431-250d47800781\" (UID: \"d78d4dd3-c2b7-43bc-b431-250d47800781\") " Mar 08 01:26:04 crc kubenswrapper[4762]: I0308 01:26:04.803057 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78d4dd3-c2b7-43bc-b431-250d47800781-kube-api-access-69klp" (OuterVolumeSpecName: "kube-api-access-69klp") pod "d78d4dd3-c2b7-43bc-b431-250d47800781" (UID: "d78d4dd3-c2b7-43bc-b431-250d47800781"). InnerVolumeSpecName "kube-api-access-69klp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:26:04 crc kubenswrapper[4762]: I0308 01:26:04.898697 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69klp\" (UniqueName: \"kubernetes.io/projected/d78d4dd3-c2b7-43bc-b431-250d47800781-kube-api-access-69klp\") on node \"crc\" DevicePath \"\"" Mar 08 01:26:05 crc kubenswrapper[4762]: I0308 01:26:05.286310 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548886-zxl2p" event={"ID":"d78d4dd3-c2b7-43bc-b431-250d47800781","Type":"ContainerDied","Data":"07213ce6cceca809b8b181b42cde094fe713d425e2b06d45f99baf688c872708"} Mar 08 01:26:05 crc kubenswrapper[4762]: I0308 01:26:05.286657 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07213ce6cceca809b8b181b42cde094fe713d425e2b06d45f99baf688c872708" Mar 08 01:26:05 crc kubenswrapper[4762]: I0308 01:26:05.286373 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548886-zxl2p" Mar 08 01:26:05 crc kubenswrapper[4762]: I0308 01:26:05.779526 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548880-zg8tk"] Mar 08 01:26:05 crc kubenswrapper[4762]: I0308 01:26:05.796002 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548880-zg8tk"] Mar 08 01:26:07 crc kubenswrapper[4762]: I0308 01:26:07.279167 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="206cd4a3-f2b1-4097-bc50-14a4c2042c01" path="/var/lib/kubelet/pods/206cd4a3-f2b1-4097-bc50-14a4c2042c01/volumes" Mar 08 01:26:32 crc kubenswrapper[4762]: I0308 01:26:32.093589 4762 scope.go:117] "RemoveContainer" containerID="2fbd0346c0abe89246080d3b86033c92a913f9afa97b81e987b1e55053bf3d13" Mar 08 01:27:42 crc kubenswrapper[4762]: I0308 01:27:42.852078 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:27:42 crc kubenswrapper[4762]: I0308 01:27:42.852791 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:28:00 crc kubenswrapper[4762]: I0308 01:28:00.149007 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548888-ptgtj"] Mar 08 01:28:00 crc kubenswrapper[4762]: E0308 01:28:00.154164 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78d4dd3-c2b7-43bc-b431-250d47800781" containerName="oc" Mar 08 01:28:00 crc 
kubenswrapper[4762]: I0308 01:28:00.154366 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78d4dd3-c2b7-43bc-b431-250d47800781" containerName="oc" Mar 08 01:28:00 crc kubenswrapper[4762]: I0308 01:28:00.156808 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78d4dd3-c2b7-43bc-b431-250d47800781" containerName="oc" Mar 08 01:28:00 crc kubenswrapper[4762]: I0308 01:28:00.157930 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548888-ptgtj" Mar 08 01:28:00 crc kubenswrapper[4762]: I0308 01:28:00.168002 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:28:00 crc kubenswrapper[4762]: I0308 01:28:00.168178 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:28:00 crc kubenswrapper[4762]: I0308 01:28:00.168631 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:28:00 crc kubenswrapper[4762]: I0308 01:28:00.170046 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548888-ptgtj"] Mar 08 01:28:00 crc kubenswrapper[4762]: I0308 01:28:00.258138 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9b86\" (UniqueName: \"kubernetes.io/projected/96aaeade-1629-4d49-8cbe-16d893443bbb-kube-api-access-z9b86\") pod \"auto-csr-approver-29548888-ptgtj\" (UID: \"96aaeade-1629-4d49-8cbe-16d893443bbb\") " pod="openshift-infra/auto-csr-approver-29548888-ptgtj" Mar 08 01:28:00 crc kubenswrapper[4762]: I0308 01:28:00.360365 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9b86\" (UniqueName: \"kubernetes.io/projected/96aaeade-1629-4d49-8cbe-16d893443bbb-kube-api-access-z9b86\") pod \"auto-csr-approver-29548888-ptgtj\" 
(UID: \"96aaeade-1629-4d49-8cbe-16d893443bbb\") " pod="openshift-infra/auto-csr-approver-29548888-ptgtj" Mar 08 01:28:00 crc kubenswrapper[4762]: I0308 01:28:00.384086 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9b86\" (UniqueName: \"kubernetes.io/projected/96aaeade-1629-4d49-8cbe-16d893443bbb-kube-api-access-z9b86\") pod \"auto-csr-approver-29548888-ptgtj\" (UID: \"96aaeade-1629-4d49-8cbe-16d893443bbb\") " pod="openshift-infra/auto-csr-approver-29548888-ptgtj" Mar 08 01:28:00 crc kubenswrapper[4762]: I0308 01:28:00.498808 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548888-ptgtj" Mar 08 01:28:01 crc kubenswrapper[4762]: I0308 01:28:01.013459 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548888-ptgtj"] Mar 08 01:28:01 crc kubenswrapper[4762]: I0308 01:28:01.027597 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 01:28:01 crc kubenswrapper[4762]: I0308 01:28:01.677015 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548888-ptgtj" event={"ID":"96aaeade-1629-4d49-8cbe-16d893443bbb","Type":"ContainerStarted","Data":"22ddaa926d9b44dbc222dce4ff851a05c74c20c3cef40cfd1cf4c06d1f1c1e8a"} Mar 08 01:28:02 crc kubenswrapper[4762]: I0308 01:28:02.689419 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548888-ptgtj" event={"ID":"96aaeade-1629-4d49-8cbe-16d893443bbb","Type":"ContainerStarted","Data":"d5f1528e612983e9f39017a074eace85e01e565c27b83c27714b130adc05f948"} Mar 08 01:28:02 crc kubenswrapper[4762]: I0308 01:28:02.710019 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548888-ptgtj" podStartSLOduration=1.532002227 podStartE2EDuration="2.710002296s" podCreationTimestamp="2026-03-08 01:28:00 
+0000 UTC" firstStartedPulling="2026-03-08 01:28:01.027331364 +0000 UTC m=+3902.501475708" lastFinishedPulling="2026-03-08 01:28:02.205331433 +0000 UTC m=+3903.679475777" observedRunningTime="2026-03-08 01:28:02.704703045 +0000 UTC m=+3904.178847409" watchObservedRunningTime="2026-03-08 01:28:02.710002296 +0000 UTC m=+3904.184146660" Mar 08 01:28:03 crc kubenswrapper[4762]: I0308 01:28:03.705639 4762 generic.go:334] "Generic (PLEG): container finished" podID="96aaeade-1629-4d49-8cbe-16d893443bbb" containerID="d5f1528e612983e9f39017a074eace85e01e565c27b83c27714b130adc05f948" exitCode=0 Mar 08 01:28:03 crc kubenswrapper[4762]: I0308 01:28:03.705783 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548888-ptgtj" event={"ID":"96aaeade-1629-4d49-8cbe-16d893443bbb","Type":"ContainerDied","Data":"d5f1528e612983e9f39017a074eace85e01e565c27b83c27714b130adc05f948"} Mar 08 01:28:05 crc kubenswrapper[4762]: I0308 01:28:05.190006 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548888-ptgtj" Mar 08 01:28:05 crc kubenswrapper[4762]: I0308 01:28:05.311111 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9b86\" (UniqueName: \"kubernetes.io/projected/96aaeade-1629-4d49-8cbe-16d893443bbb-kube-api-access-z9b86\") pod \"96aaeade-1629-4d49-8cbe-16d893443bbb\" (UID: \"96aaeade-1629-4d49-8cbe-16d893443bbb\") " Mar 08 01:28:05 crc kubenswrapper[4762]: I0308 01:28:05.316560 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96aaeade-1629-4d49-8cbe-16d893443bbb-kube-api-access-z9b86" (OuterVolumeSpecName: "kube-api-access-z9b86") pod "96aaeade-1629-4d49-8cbe-16d893443bbb" (UID: "96aaeade-1629-4d49-8cbe-16d893443bbb"). InnerVolumeSpecName "kube-api-access-z9b86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:28:05 crc kubenswrapper[4762]: I0308 01:28:05.413682 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9b86\" (UniqueName: \"kubernetes.io/projected/96aaeade-1629-4d49-8cbe-16d893443bbb-kube-api-access-z9b86\") on node \"crc\" DevicePath \"\"" Mar 08 01:28:05 crc kubenswrapper[4762]: I0308 01:28:05.727509 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548888-ptgtj" event={"ID":"96aaeade-1629-4d49-8cbe-16d893443bbb","Type":"ContainerDied","Data":"22ddaa926d9b44dbc222dce4ff851a05c74c20c3cef40cfd1cf4c06d1f1c1e8a"} Mar 08 01:28:05 crc kubenswrapper[4762]: I0308 01:28:05.727547 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22ddaa926d9b44dbc222dce4ff851a05c74c20c3cef40cfd1cf4c06d1f1c1e8a" Mar 08 01:28:05 crc kubenswrapper[4762]: I0308 01:28:05.727594 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548888-ptgtj" Mar 08 01:28:05 crc kubenswrapper[4762]: I0308 01:28:05.793229 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548882-62fnf"] Mar 08 01:28:05 crc kubenswrapper[4762]: I0308 01:28:05.803615 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548882-62fnf"] Mar 08 01:28:07 crc kubenswrapper[4762]: I0308 01:28:07.277999 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e57833b9-6496-4543-ba4c-6b596f787168" path="/var/lib/kubelet/pods/e57833b9-6496-4543-ba4c-6b596f787168/volumes" Mar 08 01:28:12 crc kubenswrapper[4762]: I0308 01:28:12.851434 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 08 01:28:12 crc kubenswrapper[4762]: I0308 01:28:12.852187 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:28:32 crc kubenswrapper[4762]: I0308 01:28:32.257073 4762 scope.go:117] "RemoveContainer" containerID="3d27fd3bfd9edc564bcbf12ddef7d64a19fdff389c7be1e03c374eb169878b54" Mar 08 01:28:42 crc kubenswrapper[4762]: I0308 01:28:42.851912 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:28:42 crc kubenswrapper[4762]: I0308 01:28:42.852746 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:28:42 crc kubenswrapper[4762]: I0308 01:28:42.852899 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 01:28:42 crc kubenswrapper[4762]: I0308 01:28:42.854094 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 
08 01:28:42 crc kubenswrapper[4762]: I0308 01:28:42.854213 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" gracePeriod=600 Mar 08 01:28:42 crc kubenswrapper[4762]: E0308 01:28:42.983856 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:28:43 crc kubenswrapper[4762]: I0308 01:28:43.131003 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" exitCode=0 Mar 08 01:28:43 crc kubenswrapper[4762]: I0308 01:28:43.131055 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725"} Mar 08 01:28:43 crc kubenswrapper[4762]: I0308 01:28:43.131094 4762 scope.go:117] "RemoveContainer" containerID="ac23d3f891dec3c1df5f19e5bcfc8231d42d9eccb5121b7380d9b3a1de7360b0" Mar 08 01:28:43 crc kubenswrapper[4762]: I0308 01:28:43.132028 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:28:43 crc kubenswrapper[4762]: E0308 01:28:43.133315 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:28:55 crc kubenswrapper[4762]: I0308 01:28:55.277813 4762 generic.go:334] "Generic (PLEG): container finished" podID="41caed5b-cdb9-492f-bb7b-6b799e1811e0" containerID="89ab07d3b7e65fe24fff0e782f3af60631e0ee91f646e83d619fe4dfcff8aa9f" exitCode=0 Mar 08 01:28:55 crc kubenswrapper[4762]: I0308 01:28:55.277883 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" event={"ID":"41caed5b-cdb9-492f-bb7b-6b799e1811e0","Type":"ContainerDied","Data":"89ab07d3b7e65fe24fff0e782f3af60631e0ee91f646e83d619fe4dfcff8aa9f"} Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.834804 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.981658 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npgvj\" (UniqueName: \"kubernetes.io/projected/41caed5b-cdb9-492f-bb7b-6b799e1811e0-kube-api-access-npgvj\") pod \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.981844 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-3\") pod \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.981879 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ceph\") pod \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.982002 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-inventory\") pod \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.982728 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-2\") pod \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.982796 4762 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-0\") pod \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.982823 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-1\") pod \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.982858 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ceph-nova-0\") pod \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.982878 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-extra-config-0\") pod \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.982896 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ssh-key-openstack-edpm-ipam\") pod \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.982942 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-migration-ssh-key-0\") pod \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.982978 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-migration-ssh-key-1\") pod \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.983041 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-custom-ceph-combined-ca-bundle\") pod \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\" (UID: \"41caed5b-cdb9-492f-bb7b-6b799e1811e0\") " Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.988432 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41caed5b-cdb9-492f-bb7b-6b799e1811e0-kube-api-access-npgvj" (OuterVolumeSpecName: "kube-api-access-npgvj") pod "41caed5b-cdb9-492f-bb7b-6b799e1811e0" (UID: "41caed5b-cdb9-492f-bb7b-6b799e1811e0"). InnerVolumeSpecName "kube-api-access-npgvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.989091 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ceph" (OuterVolumeSpecName: "ceph") pod "41caed5b-cdb9-492f-bb7b-6b799e1811e0" (UID: "41caed5b-cdb9-492f-bb7b-6b799e1811e0"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:28:56 crc kubenswrapper[4762]: I0308 01:28:56.990864 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "41caed5b-cdb9-492f-bb7b-6b799e1811e0" (UID: "41caed5b-cdb9-492f-bb7b-6b799e1811e0"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.014811 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "41caed5b-cdb9-492f-bb7b-6b799e1811e0" (UID: "41caed5b-cdb9-492f-bb7b-6b799e1811e0"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.022690 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "41caed5b-cdb9-492f-bb7b-6b799e1811e0" (UID: "41caed5b-cdb9-492f-bb7b-6b799e1811e0"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.026045 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-inventory" (OuterVolumeSpecName: "inventory") pod "41caed5b-cdb9-492f-bb7b-6b799e1811e0" (UID: "41caed5b-cdb9-492f-bb7b-6b799e1811e0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.029727 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "41caed5b-cdb9-492f-bb7b-6b799e1811e0" (UID: "41caed5b-cdb9-492f-bb7b-6b799e1811e0"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.033130 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "41caed5b-cdb9-492f-bb7b-6b799e1811e0" (UID: "41caed5b-cdb9-492f-bb7b-6b799e1811e0"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.034339 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "41caed5b-cdb9-492f-bb7b-6b799e1811e0" (UID: "41caed5b-cdb9-492f-bb7b-6b799e1811e0"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.040394 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "41caed5b-cdb9-492f-bb7b-6b799e1811e0" (UID: "41caed5b-cdb9-492f-bb7b-6b799e1811e0"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.047482 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "41caed5b-cdb9-492f-bb7b-6b799e1811e0" (UID: "41caed5b-cdb9-492f-bb7b-6b799e1811e0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.049368 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "41caed5b-cdb9-492f-bb7b-6b799e1811e0" (UID: "41caed5b-cdb9-492f-bb7b-6b799e1811e0"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.079042 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "41caed5b-cdb9-492f-bb7b-6b799e1811e0" (UID: "41caed5b-cdb9-492f-bb7b-6b799e1811e0"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.084325 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.084359 4762 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.084371 4762 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.084381 4762 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.084390 4762 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.084399 4762 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.084408 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ssh-key-openstack-edpm-ipam\") on node \"crc\" 
DevicePath \"\"" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.084452 4762 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.084461 4762 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.084470 4762 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.084480 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npgvj\" (UniqueName: \"kubernetes.io/projected/41caed5b-cdb9-492f-bb7b-6b799e1811e0-kube-api-access-npgvj\") on node \"crc\" DevicePath \"\"" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.084520 4762 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.084530 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/41caed5b-cdb9-492f-bb7b-6b799e1811e0-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.307388 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" 
event={"ID":"41caed5b-cdb9-492f-bb7b-6b799e1811e0","Type":"ContainerDied","Data":"31a2fca9c75efa460e1b2a1b6ec39038767597d13c7ae55029280b7ba764151a"} Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.307430 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31a2fca9c75efa460e1b2a1b6ec39038767597d13c7ae55029280b7ba764151a" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.307494 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.431730 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc"] Mar 08 01:28:57 crc kubenswrapper[4762]: E0308 01:28:57.436211 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96aaeade-1629-4d49-8cbe-16d893443bbb" containerName="oc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.436243 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="96aaeade-1629-4d49-8cbe-16d893443bbb" containerName="oc" Mar 08 01:28:57 crc kubenswrapper[4762]: E0308 01:28:57.436271 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41caed5b-cdb9-492f-bb7b-6b799e1811e0" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.436280 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="41caed5b-cdb9-492f-bb7b-6b799e1811e0" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.436525 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="41caed5b-cdb9-492f-bb7b-6b799e1811e0" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.436551 4762 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="96aaeade-1629-4d49-8cbe-16d893443bbb" containerName="oc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.437457 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.440100 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.440441 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.440604 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.442363 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.442421 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.446165 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc"] Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.450490 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.501333 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 
01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.501491 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceph\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.501532 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.501565 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.501669 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.501752 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-tw4r4\" (UniqueName: \"kubernetes.io/projected/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-kube-api-access-tw4r4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.501865 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.501985 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.603960 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.604320 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.604533 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.604689 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceph\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.604814 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.604930 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.605108 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.605245 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw4r4\" (UniqueName: \"kubernetes.io/projected/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-kube-api-access-tw4r4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.612365 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.612414 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceph\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.614166 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.614214 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.614469 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.614584 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.617197 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: 
\"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.625951 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw4r4\" (UniqueName: \"kubernetes.io/projected/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-kube-api-access-tw4r4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:57 crc kubenswrapper[4762]: I0308 01:28:57.768154 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:28:58 crc kubenswrapper[4762]: I0308 01:28:58.263554 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:28:58 crc kubenswrapper[4762]: E0308 01:28:58.264487 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:28:58 crc kubenswrapper[4762]: I0308 01:28:58.395589 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc"] Mar 08 01:28:59 crc kubenswrapper[4762]: I0308 01:28:59.329325 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" event={"ID":"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc","Type":"ContainerStarted","Data":"88e5bbc2699cfbe86af7b368bb3d08b18e65011b381ca60ea852b5745ada6abf"} Mar 08 01:28:59 crc kubenswrapper[4762]: 
I0308 01:28:59.330045 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" event={"ID":"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc","Type":"ContainerStarted","Data":"c362f601b559d582317966d986adc4f197cd026675205ea90049ef48bae1035a"} Mar 08 01:28:59 crc kubenswrapper[4762]: I0308 01:28:59.368736 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" podStartSLOduration=1.820281263 podStartE2EDuration="2.368696862s" podCreationTimestamp="2026-03-08 01:28:57 +0000 UTC" firstStartedPulling="2026-03-08 01:28:58.395400446 +0000 UTC m=+3959.869544830" lastFinishedPulling="2026-03-08 01:28:58.943816045 +0000 UTC m=+3960.417960429" observedRunningTime="2026-03-08 01:28:59.354726655 +0000 UTC m=+3960.828871009" watchObservedRunningTime="2026-03-08 01:28:59.368696862 +0000 UTC m=+3960.842841266" Mar 08 01:29:11 crc kubenswrapper[4762]: I0308 01:29:11.264014 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:29:11 crc kubenswrapper[4762]: E0308 01:29:11.264722 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:29:24 crc kubenswrapper[4762]: I0308 01:29:24.263435 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:29:24 crc kubenswrapper[4762]: E0308 01:29:24.264203 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:29:38 crc kubenswrapper[4762]: I0308 01:29:38.263549 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:29:38 crc kubenswrapper[4762]: E0308 01:29:38.264392 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:29:52 crc kubenswrapper[4762]: I0308 01:29:52.262927 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:29:52 crc kubenswrapper[4762]: E0308 01:29:52.263656 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.183205 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548890-dxpl4"] Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.185926 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548890-dxpl4" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.187779 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.188062 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.189449 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.219043 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl"] Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.221714 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.227125 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.227273 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.233735 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548890-dxpl4"] Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.253826 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl"] Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.257504 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2a2ce822-2dad-4840-ae9a-886c62dd392f-secret-volume\") pod \"collect-profiles-29548890-v7nzl\" (UID: \"2a2ce822-2dad-4840-ae9a-886c62dd392f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.257586 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a2ce822-2dad-4840-ae9a-886c62dd392f-config-volume\") pod \"collect-profiles-29548890-v7nzl\" (UID: \"2a2ce822-2dad-4840-ae9a-886c62dd392f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.258101 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvnst\" (UniqueName: \"kubernetes.io/projected/9c43864c-181b-492a-84db-593b684686e9-kube-api-access-fvnst\") pod \"auto-csr-approver-29548890-dxpl4\" (UID: \"9c43864c-181b-492a-84db-593b684686e9\") " pod="openshift-infra/auto-csr-approver-29548890-dxpl4" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.258203 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pbcn\" (UniqueName: \"kubernetes.io/projected/2a2ce822-2dad-4840-ae9a-886c62dd392f-kube-api-access-8pbcn\") pod \"collect-profiles-29548890-v7nzl\" (UID: \"2a2ce822-2dad-4840-ae9a-886c62dd392f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.360084 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvnst\" (UniqueName: \"kubernetes.io/projected/9c43864c-181b-492a-84db-593b684686e9-kube-api-access-fvnst\") pod \"auto-csr-approver-29548890-dxpl4\" (UID: \"9c43864c-181b-492a-84db-593b684686e9\") " pod="openshift-infra/auto-csr-approver-29548890-dxpl4" Mar 08 
01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.360694 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pbcn\" (UniqueName: \"kubernetes.io/projected/2a2ce822-2dad-4840-ae9a-886c62dd392f-kube-api-access-8pbcn\") pod \"collect-profiles-29548890-v7nzl\" (UID: \"2a2ce822-2dad-4840-ae9a-886c62dd392f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.360969 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a2ce822-2dad-4840-ae9a-886c62dd392f-secret-volume\") pod \"collect-profiles-29548890-v7nzl\" (UID: \"2a2ce822-2dad-4840-ae9a-886c62dd392f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.361104 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a2ce822-2dad-4840-ae9a-886c62dd392f-config-volume\") pod \"collect-profiles-29548890-v7nzl\" (UID: \"2a2ce822-2dad-4840-ae9a-886c62dd392f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.363468 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a2ce822-2dad-4840-ae9a-886c62dd392f-config-volume\") pod \"collect-profiles-29548890-v7nzl\" (UID: \"2a2ce822-2dad-4840-ae9a-886c62dd392f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.370885 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a2ce822-2dad-4840-ae9a-886c62dd392f-secret-volume\") pod \"collect-profiles-29548890-v7nzl\" (UID: 
\"2a2ce822-2dad-4840-ae9a-886c62dd392f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.379343 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pbcn\" (UniqueName: \"kubernetes.io/projected/2a2ce822-2dad-4840-ae9a-886c62dd392f-kube-api-access-8pbcn\") pod \"collect-profiles-29548890-v7nzl\" (UID: \"2a2ce822-2dad-4840-ae9a-886c62dd392f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.386614 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvnst\" (UniqueName: \"kubernetes.io/projected/9c43864c-181b-492a-84db-593b684686e9-kube-api-access-fvnst\") pod \"auto-csr-approver-29548890-dxpl4\" (UID: \"9c43864c-181b-492a-84db-593b684686e9\") " pod="openshift-infra/auto-csr-approver-29548890-dxpl4" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.519740 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548890-dxpl4" Mar 08 01:30:00 crc kubenswrapper[4762]: I0308 01:30:00.551234 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl" Mar 08 01:30:01 crc kubenswrapper[4762]: I0308 01:30:01.068487 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548890-dxpl4"] Mar 08 01:30:01 crc kubenswrapper[4762]: I0308 01:30:01.136677 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl"] Mar 08 01:30:01 crc kubenswrapper[4762]: W0308 01:30:01.142053 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2a2ce822_2dad_4840_ae9a_886c62dd392f.slice/crio-39cbd092a7df0ee639665a503dadbcc8e4658e2c8fa0b600f816fa71fec91ecd WatchSource:0}: Error finding container 39cbd092a7df0ee639665a503dadbcc8e4658e2c8fa0b600f816fa71fec91ecd: Status 404 returned error can't find the container with id 39cbd092a7df0ee639665a503dadbcc8e4658e2c8fa0b600f816fa71fec91ecd Mar 08 01:30:02 crc kubenswrapper[4762]: I0308 01:30:02.079333 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548890-dxpl4" event={"ID":"9c43864c-181b-492a-84db-593b684686e9","Type":"ContainerStarted","Data":"a3cd4f47c8e64eaf1528a85993b1deae46e92e0ef74d4630b0fc5b1302da4049"} Mar 08 01:30:02 crc kubenswrapper[4762]: I0308 01:30:02.081910 4762 generic.go:334] "Generic (PLEG): container finished" podID="2a2ce822-2dad-4840-ae9a-886c62dd392f" containerID="cedf3a90c66aaa613b61faf01cfbaefcce797ca73c0d61488f10301693ea2f4e" exitCode=0 Mar 08 01:30:02 crc kubenswrapper[4762]: I0308 01:30:02.082084 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl" event={"ID":"2a2ce822-2dad-4840-ae9a-886c62dd392f","Type":"ContainerDied","Data":"cedf3a90c66aaa613b61faf01cfbaefcce797ca73c0d61488f10301693ea2f4e"} Mar 08 01:30:02 crc kubenswrapper[4762]: I0308 01:30:02.082167 4762 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl" event={"ID":"2a2ce822-2dad-4840-ae9a-886c62dd392f","Type":"ContainerStarted","Data":"39cbd092a7df0ee639665a503dadbcc8e4658e2c8fa0b600f816fa71fec91ecd"} Mar 08 01:30:03 crc kubenswrapper[4762]: I0308 01:30:03.095034 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548890-dxpl4" event={"ID":"9c43864c-181b-492a-84db-593b684686e9","Type":"ContainerStarted","Data":"da1391907ccec4d1aa90ef666aca5c27e3ff8a63ce28182d7d9c4d5dbeaf350d"} Mar 08 01:30:03 crc kubenswrapper[4762]: I0308 01:30:03.126058 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548890-dxpl4" podStartSLOduration=1.581057367 podStartE2EDuration="3.126030885s" podCreationTimestamp="2026-03-08 01:30:00 +0000 UTC" firstStartedPulling="2026-03-08 01:30:01.095981353 +0000 UTC m=+4022.570125697" lastFinishedPulling="2026-03-08 01:30:02.640954871 +0000 UTC m=+4024.115099215" observedRunningTime="2026-03-08 01:30:03.113526094 +0000 UTC m=+4024.587670458" watchObservedRunningTime="2026-03-08 01:30:03.126030885 +0000 UTC m=+4024.600175269" Mar 08 01:30:03 crc kubenswrapper[4762]: I0308 01:30:03.486095 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl" Mar 08 01:30:03 crc kubenswrapper[4762]: I0308 01:30:03.536925 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a2ce822-2dad-4840-ae9a-886c62dd392f-config-volume\") pod \"2a2ce822-2dad-4840-ae9a-886c62dd392f\" (UID: \"2a2ce822-2dad-4840-ae9a-886c62dd392f\") " Mar 08 01:30:03 crc kubenswrapper[4762]: I0308 01:30:03.536999 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pbcn\" (UniqueName: \"kubernetes.io/projected/2a2ce822-2dad-4840-ae9a-886c62dd392f-kube-api-access-8pbcn\") pod \"2a2ce822-2dad-4840-ae9a-886c62dd392f\" (UID: \"2a2ce822-2dad-4840-ae9a-886c62dd392f\") " Mar 08 01:30:03 crc kubenswrapper[4762]: I0308 01:30:03.537104 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a2ce822-2dad-4840-ae9a-886c62dd392f-secret-volume\") pod \"2a2ce822-2dad-4840-ae9a-886c62dd392f\" (UID: \"2a2ce822-2dad-4840-ae9a-886c62dd392f\") " Mar 08 01:30:03 crc kubenswrapper[4762]: I0308 01:30:03.537564 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a2ce822-2dad-4840-ae9a-886c62dd392f-config-volume" (OuterVolumeSpecName: "config-volume") pod "2a2ce822-2dad-4840-ae9a-886c62dd392f" (UID: "2a2ce822-2dad-4840-ae9a-886c62dd392f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 01:30:03 crc kubenswrapper[4762]: I0308 01:30:03.538017 4762 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a2ce822-2dad-4840-ae9a-886c62dd392f-config-volume\") on node \"crc\" DevicePath \"\"" Mar 08 01:30:03 crc kubenswrapper[4762]: I0308 01:30:03.543898 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a2ce822-2dad-4840-ae9a-886c62dd392f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2a2ce822-2dad-4840-ae9a-886c62dd392f" (UID: "2a2ce822-2dad-4840-ae9a-886c62dd392f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:30:03 crc kubenswrapper[4762]: I0308 01:30:03.545611 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a2ce822-2dad-4840-ae9a-886c62dd392f-kube-api-access-8pbcn" (OuterVolumeSpecName: "kube-api-access-8pbcn") pod "2a2ce822-2dad-4840-ae9a-886c62dd392f" (UID: "2a2ce822-2dad-4840-ae9a-886c62dd392f"). InnerVolumeSpecName "kube-api-access-8pbcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:30:03 crc kubenswrapper[4762]: I0308 01:30:03.640225 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pbcn\" (UniqueName: \"kubernetes.io/projected/2a2ce822-2dad-4840-ae9a-886c62dd392f-kube-api-access-8pbcn\") on node \"crc\" DevicePath \"\"" Mar 08 01:30:03 crc kubenswrapper[4762]: I0308 01:30:03.640255 4762 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2a2ce822-2dad-4840-ae9a-886c62dd392f-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 08 01:30:04 crc kubenswrapper[4762]: I0308 01:30:04.109546 4762 generic.go:334] "Generic (PLEG): container finished" podID="9c43864c-181b-492a-84db-593b684686e9" containerID="da1391907ccec4d1aa90ef666aca5c27e3ff8a63ce28182d7d9c4d5dbeaf350d" exitCode=0 Mar 08 01:30:04 crc kubenswrapper[4762]: I0308 01:30:04.109653 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548890-dxpl4" event={"ID":"9c43864c-181b-492a-84db-593b684686e9","Type":"ContainerDied","Data":"da1391907ccec4d1aa90ef666aca5c27e3ff8a63ce28182d7d9c4d5dbeaf350d"} Mar 08 01:30:04 crc kubenswrapper[4762]: I0308 01:30:04.112272 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl" Mar 08 01:30:04 crc kubenswrapper[4762]: I0308 01:30:04.112276 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl" event={"ID":"2a2ce822-2dad-4840-ae9a-886c62dd392f","Type":"ContainerDied","Data":"39cbd092a7df0ee639665a503dadbcc8e4658e2c8fa0b600f816fa71fec91ecd"} Mar 08 01:30:04 crc kubenswrapper[4762]: I0308 01:30:04.112314 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39cbd092a7df0ee639665a503dadbcc8e4658e2c8fa0b600f816fa71fec91ecd" Mar 08 01:30:04 crc kubenswrapper[4762]: I0308 01:30:04.565952 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k"] Mar 08 01:30:04 crc kubenswrapper[4762]: I0308 01:30:04.578541 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548845-2m84k"] Mar 08 01:30:05 crc kubenswrapper[4762]: I0308 01:30:05.264442 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:30:05 crc kubenswrapper[4762]: E0308 01:30:05.264893 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:30:05 crc kubenswrapper[4762]: I0308 01:30:05.285407 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15be8fbe-420e-43f5-9797-eba8d83627c5" path="/var/lib/kubelet/pods/15be8fbe-420e-43f5-9797-eba8d83627c5/volumes" Mar 08 01:30:05 crc 
kubenswrapper[4762]: I0308 01:30:05.574879 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548890-dxpl4" Mar 08 01:30:05 crc kubenswrapper[4762]: I0308 01:30:05.692199 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvnst\" (UniqueName: \"kubernetes.io/projected/9c43864c-181b-492a-84db-593b684686e9-kube-api-access-fvnst\") pod \"9c43864c-181b-492a-84db-593b684686e9\" (UID: \"9c43864c-181b-492a-84db-593b684686e9\") " Mar 08 01:30:05 crc kubenswrapper[4762]: I0308 01:30:05.728271 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c43864c-181b-492a-84db-593b684686e9-kube-api-access-fvnst" (OuterVolumeSpecName: "kube-api-access-fvnst") pod "9c43864c-181b-492a-84db-593b684686e9" (UID: "9c43864c-181b-492a-84db-593b684686e9"). InnerVolumeSpecName "kube-api-access-fvnst". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:30:05 crc kubenswrapper[4762]: I0308 01:30:05.798818 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvnst\" (UniqueName: \"kubernetes.io/projected/9c43864c-181b-492a-84db-593b684686e9-kube-api-access-fvnst\") on node \"crc\" DevicePath \"\"" Mar 08 01:30:06 crc kubenswrapper[4762]: I0308 01:30:06.138956 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548890-dxpl4" event={"ID":"9c43864c-181b-492a-84db-593b684686e9","Type":"ContainerDied","Data":"a3cd4f47c8e64eaf1528a85993b1deae46e92e0ef74d4630b0fc5b1302da4049"} Mar 08 01:30:06 crc kubenswrapper[4762]: I0308 01:30:06.139014 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3cd4f47c8e64eaf1528a85993b1deae46e92e0ef74d4630b0fc5b1302da4049" Mar 08 01:30:06 crc kubenswrapper[4762]: I0308 01:30:06.139349 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548890-dxpl4" Mar 08 01:30:06 crc kubenswrapper[4762]: I0308 01:30:06.198950 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548884-5289h"] Mar 08 01:30:06 crc kubenswrapper[4762]: I0308 01:30:06.208934 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548884-5289h"] Mar 08 01:30:07 crc kubenswrapper[4762]: I0308 01:30:07.282034 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f5f9ac4-2a57-4024-a546-3de917d6528b" path="/var/lib/kubelet/pods/8f5f9ac4-2a57-4024-a546-3de917d6528b/volumes" Mar 08 01:30:17 crc kubenswrapper[4762]: I0308 01:30:17.264939 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:30:17 crc kubenswrapper[4762]: E0308 01:30:17.266472 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:30:28 crc kubenswrapper[4762]: I0308 01:30:28.264306 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:30:28 crc kubenswrapper[4762]: E0308 01:30:28.265238 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" 
podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:30:32 crc kubenswrapper[4762]: I0308 01:30:32.409994 4762 scope.go:117] "RemoveContainer" containerID="b9720105288e425e1c661fbea9fbcb99f3198dba71066d187a11bca95c80bea7" Mar 08 01:30:32 crc kubenswrapper[4762]: I0308 01:30:32.478755 4762 scope.go:117] "RemoveContainer" containerID="682d0c9adc98928d259e839f9442c06d4a66d728497ef16a218fc2ba526b2dc7" Mar 08 01:30:43 crc kubenswrapper[4762]: I0308 01:30:43.264221 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:30:43 crc kubenswrapper[4762]: E0308 01:30:43.265099 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:30:58 crc kubenswrapper[4762]: I0308 01:30:58.263879 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:30:58 crc kubenswrapper[4762]: E0308 01:30:58.264972 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:31:10 crc kubenswrapper[4762]: I0308 01:31:10.264966 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:31:10 crc kubenswrapper[4762]: E0308 01:31:10.266328 4762 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:31:23 crc kubenswrapper[4762]: I0308 01:31:23.265921 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:31:23 crc kubenswrapper[4762]: E0308 01:31:23.266994 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:31:28 crc kubenswrapper[4762]: I0308 01:31:28.868951 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c8rrg"] Mar 08 01:31:28 crc kubenswrapper[4762]: E0308 01:31:28.870716 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a2ce822-2dad-4840-ae9a-886c62dd392f" containerName="collect-profiles" Mar 08 01:31:28 crc kubenswrapper[4762]: I0308 01:31:28.870748 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a2ce822-2dad-4840-ae9a-886c62dd392f" containerName="collect-profiles" Mar 08 01:31:28 crc kubenswrapper[4762]: E0308 01:31:28.870850 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c43864c-181b-492a-84db-593b684686e9" containerName="oc" Mar 08 01:31:28 crc kubenswrapper[4762]: I0308 01:31:28.870870 4762 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9c43864c-181b-492a-84db-593b684686e9" containerName="oc" Mar 08 01:31:28 crc kubenswrapper[4762]: I0308 01:31:28.871499 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a2ce822-2dad-4840-ae9a-886c62dd392f" containerName="collect-profiles" Mar 08 01:31:28 crc kubenswrapper[4762]: I0308 01:31:28.871566 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c43864c-181b-492a-84db-593b684686e9" containerName="oc" Mar 08 01:31:28 crc kubenswrapper[4762]: I0308 01:31:28.875502 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:28 crc kubenswrapper[4762]: I0308 01:31:28.894150 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c8rrg"] Mar 08 01:31:29 crc kubenswrapper[4762]: I0308 01:31:29.000475 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-catalog-content\") pod \"community-operators-c8rrg\" (UID: \"c9f0cf8b-f5f0-45d3-a3a8-b33849700479\") " pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:29 crc kubenswrapper[4762]: I0308 01:31:29.000658 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-utilities\") pod \"community-operators-c8rrg\" (UID: \"c9f0cf8b-f5f0-45d3-a3a8-b33849700479\") " pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:29 crc kubenswrapper[4762]: I0308 01:31:29.000960 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnt79\" (UniqueName: \"kubernetes.io/projected/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-kube-api-access-qnt79\") pod \"community-operators-c8rrg\" (UID: 
\"c9f0cf8b-f5f0-45d3-a3a8-b33849700479\") " pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:29 crc kubenswrapper[4762]: I0308 01:31:29.103707 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnt79\" (UniqueName: \"kubernetes.io/projected/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-kube-api-access-qnt79\") pod \"community-operators-c8rrg\" (UID: \"c9f0cf8b-f5f0-45d3-a3a8-b33849700479\") " pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:29 crc kubenswrapper[4762]: I0308 01:31:29.103982 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-catalog-content\") pod \"community-operators-c8rrg\" (UID: \"c9f0cf8b-f5f0-45d3-a3a8-b33849700479\") " pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:29 crc kubenswrapper[4762]: I0308 01:31:29.104045 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-utilities\") pod \"community-operators-c8rrg\" (UID: \"c9f0cf8b-f5f0-45d3-a3a8-b33849700479\") " pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:29 crc kubenswrapper[4762]: I0308 01:31:29.104599 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-catalog-content\") pod \"community-operators-c8rrg\" (UID: \"c9f0cf8b-f5f0-45d3-a3a8-b33849700479\") " pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:29 crc kubenswrapper[4762]: I0308 01:31:29.104639 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-utilities\") pod \"community-operators-c8rrg\" (UID: \"c9f0cf8b-f5f0-45d3-a3a8-b33849700479\") 
" pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:29 crc kubenswrapper[4762]: I0308 01:31:29.123974 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnt79\" (UniqueName: \"kubernetes.io/projected/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-kube-api-access-qnt79\") pod \"community-operators-c8rrg\" (UID: \"c9f0cf8b-f5f0-45d3-a3a8-b33849700479\") " pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:29 crc kubenswrapper[4762]: I0308 01:31:29.223898 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:29 crc kubenswrapper[4762]: W0308 01:31:29.775280 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9f0cf8b_f5f0_45d3_a3a8_b33849700479.slice/crio-eb4352430b9b52ee5cc7ce1619023d3a1dac51de1527b607fd73218437c37b75 WatchSource:0}: Error finding container eb4352430b9b52ee5cc7ce1619023d3a1dac51de1527b607fd73218437c37b75: Status 404 returned error can't find the container with id eb4352430b9b52ee5cc7ce1619023d3a1dac51de1527b607fd73218437c37b75 Mar 08 01:31:29 crc kubenswrapper[4762]: I0308 01:31:29.775325 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c8rrg"] Mar 08 01:31:30 crc kubenswrapper[4762]: I0308 01:31:30.671684 4762 generic.go:334] "Generic (PLEG): container finished" podID="c9f0cf8b-f5f0-45d3-a3a8-b33849700479" containerID="0020b89645f957a5a81a98716d5babbe97fd08ee7bd84d4e9e00cb6c003ab75d" exitCode=0 Mar 08 01:31:30 crc kubenswrapper[4762]: I0308 01:31:30.671905 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8rrg" event={"ID":"c9f0cf8b-f5f0-45d3-a3a8-b33849700479","Type":"ContainerDied","Data":"0020b89645f957a5a81a98716d5babbe97fd08ee7bd84d4e9e00cb6c003ab75d"} Mar 08 01:31:30 crc kubenswrapper[4762]: I0308 
01:31:30.673246 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8rrg" event={"ID":"c9f0cf8b-f5f0-45d3-a3a8-b33849700479","Type":"ContainerStarted","Data":"eb4352430b9b52ee5cc7ce1619023d3a1dac51de1527b607fd73218437c37b75"} Mar 08 01:31:31 crc kubenswrapper[4762]: I0308 01:31:31.693692 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8rrg" event={"ID":"c9f0cf8b-f5f0-45d3-a3a8-b33849700479","Type":"ContainerStarted","Data":"a1550eed8b952d518301a5078317280e4d3a140b2f5170e15b0abc53acb74116"} Mar 08 01:31:33 crc kubenswrapper[4762]: I0308 01:31:33.726368 4762 generic.go:334] "Generic (PLEG): container finished" podID="c9f0cf8b-f5f0-45d3-a3a8-b33849700479" containerID="a1550eed8b952d518301a5078317280e4d3a140b2f5170e15b0abc53acb74116" exitCode=0 Mar 08 01:31:33 crc kubenswrapper[4762]: I0308 01:31:33.726510 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8rrg" event={"ID":"c9f0cf8b-f5f0-45d3-a3a8-b33849700479","Type":"ContainerDied","Data":"a1550eed8b952d518301a5078317280e4d3a140b2f5170e15b0abc53acb74116"} Mar 08 01:31:34 crc kubenswrapper[4762]: I0308 01:31:34.773809 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8rrg" event={"ID":"c9f0cf8b-f5f0-45d3-a3a8-b33849700479","Type":"ContainerStarted","Data":"a6931d2beccbf3d4558662527f214503d7bd73c1a4ffe42dcad1109798d07276"} Mar 08 01:31:34 crc kubenswrapper[4762]: I0308 01:31:34.807876 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c8rrg" podStartSLOduration=3.292988942 podStartE2EDuration="6.807846511s" podCreationTimestamp="2026-03-08 01:31:28 +0000 UTC" firstStartedPulling="2026-03-08 01:31:30.674713006 +0000 UTC m=+4112.148857390" lastFinishedPulling="2026-03-08 01:31:34.189570605 +0000 UTC m=+4115.663714959" 
observedRunningTime="2026-03-08 01:31:34.797646027 +0000 UTC m=+4116.271790371" watchObservedRunningTime="2026-03-08 01:31:34.807846511 +0000 UTC m=+4116.281990895" Mar 08 01:31:38 crc kubenswrapper[4762]: I0308 01:31:38.263592 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:31:38 crc kubenswrapper[4762]: E0308 01:31:38.265237 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:31:39 crc kubenswrapper[4762]: I0308 01:31:39.228698 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:39 crc kubenswrapper[4762]: I0308 01:31:39.229151 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:39 crc kubenswrapper[4762]: I0308 01:31:39.308866 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:39 crc kubenswrapper[4762]: I0308 01:31:39.902981 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:39 crc kubenswrapper[4762]: I0308 01:31:39.988752 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c8rrg"] Mar 08 01:31:41 crc kubenswrapper[4762]: I0308 01:31:41.857362 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c8rrg" 
podUID="c9f0cf8b-f5f0-45d3-a3a8-b33849700479" containerName="registry-server" containerID="cri-o://a6931d2beccbf3d4558662527f214503d7bd73c1a4ffe42dcad1109798d07276" gracePeriod=2 Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.459444 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.657877 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-catalog-content\") pod \"c9f0cf8b-f5f0-45d3-a3a8-b33849700479\" (UID: \"c9f0cf8b-f5f0-45d3-a3a8-b33849700479\") " Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.658198 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnt79\" (UniqueName: \"kubernetes.io/projected/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-kube-api-access-qnt79\") pod \"c9f0cf8b-f5f0-45d3-a3a8-b33849700479\" (UID: \"c9f0cf8b-f5f0-45d3-a3a8-b33849700479\") " Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.658295 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-utilities\") pod \"c9f0cf8b-f5f0-45d3-a3a8-b33849700479\" (UID: \"c9f0cf8b-f5f0-45d3-a3a8-b33849700479\") " Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.662094 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-utilities" (OuterVolumeSpecName: "utilities") pod "c9f0cf8b-f5f0-45d3-a3a8-b33849700479" (UID: "c9f0cf8b-f5f0-45d3-a3a8-b33849700479"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.677749 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-kube-api-access-qnt79" (OuterVolumeSpecName: "kube-api-access-qnt79") pod "c9f0cf8b-f5f0-45d3-a3a8-b33849700479" (UID: "c9f0cf8b-f5f0-45d3-a3a8-b33849700479"). InnerVolumeSpecName "kube-api-access-qnt79". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.750985 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9f0cf8b-f5f0-45d3-a3a8-b33849700479" (UID: "c9f0cf8b-f5f0-45d3-a3a8-b33849700479"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.762268 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.762301 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnt79\" (UniqueName: \"kubernetes.io/projected/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-kube-api-access-qnt79\") on node \"crc\" DevicePath \"\"" Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.762342 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9f0cf8b-f5f0-45d3-a3a8-b33849700479-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.874520 4762 generic.go:334] "Generic (PLEG): container finished" podID="c9f0cf8b-f5f0-45d3-a3a8-b33849700479" 
containerID="a6931d2beccbf3d4558662527f214503d7bd73c1a4ffe42dcad1109798d07276" exitCode=0 Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.874586 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8rrg" event={"ID":"c9f0cf8b-f5f0-45d3-a3a8-b33849700479","Type":"ContainerDied","Data":"a6931d2beccbf3d4558662527f214503d7bd73c1a4ffe42dcad1109798d07276"} Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.874614 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c8rrg" Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.874663 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8rrg" event={"ID":"c9f0cf8b-f5f0-45d3-a3a8-b33849700479","Type":"ContainerDied","Data":"eb4352430b9b52ee5cc7ce1619023d3a1dac51de1527b607fd73218437c37b75"} Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.874696 4762 scope.go:117] "RemoveContainer" containerID="a6931d2beccbf3d4558662527f214503d7bd73c1a4ffe42dcad1109798d07276" Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.921783 4762 scope.go:117] "RemoveContainer" containerID="a1550eed8b952d518301a5078317280e4d3a140b2f5170e15b0abc53acb74116" Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.938811 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c8rrg"] Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.953542 4762 scope.go:117] "RemoveContainer" containerID="0020b89645f957a5a81a98716d5babbe97fd08ee7bd84d4e9e00cb6c003ab75d" Mar 08 01:31:42 crc kubenswrapper[4762]: I0308 01:31:42.959075 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c8rrg"] Mar 08 01:31:43 crc kubenswrapper[4762]: I0308 01:31:43.014446 4762 scope.go:117] "RemoveContainer" containerID="a6931d2beccbf3d4558662527f214503d7bd73c1a4ffe42dcad1109798d07276" Mar 08 
01:31:43 crc kubenswrapper[4762]: E0308 01:31:43.014882 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6931d2beccbf3d4558662527f214503d7bd73c1a4ffe42dcad1109798d07276\": container with ID starting with a6931d2beccbf3d4558662527f214503d7bd73c1a4ffe42dcad1109798d07276 not found: ID does not exist" containerID="a6931d2beccbf3d4558662527f214503d7bd73c1a4ffe42dcad1109798d07276" Mar 08 01:31:43 crc kubenswrapper[4762]: I0308 01:31:43.014935 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6931d2beccbf3d4558662527f214503d7bd73c1a4ffe42dcad1109798d07276"} err="failed to get container status \"a6931d2beccbf3d4558662527f214503d7bd73c1a4ffe42dcad1109798d07276\": rpc error: code = NotFound desc = could not find container \"a6931d2beccbf3d4558662527f214503d7bd73c1a4ffe42dcad1109798d07276\": container with ID starting with a6931d2beccbf3d4558662527f214503d7bd73c1a4ffe42dcad1109798d07276 not found: ID does not exist" Mar 08 01:31:43 crc kubenswrapper[4762]: I0308 01:31:43.014967 4762 scope.go:117] "RemoveContainer" containerID="a1550eed8b952d518301a5078317280e4d3a140b2f5170e15b0abc53acb74116" Mar 08 01:31:43 crc kubenswrapper[4762]: E0308 01:31:43.015533 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1550eed8b952d518301a5078317280e4d3a140b2f5170e15b0abc53acb74116\": container with ID starting with a1550eed8b952d518301a5078317280e4d3a140b2f5170e15b0abc53acb74116 not found: ID does not exist" containerID="a1550eed8b952d518301a5078317280e4d3a140b2f5170e15b0abc53acb74116" Mar 08 01:31:43 crc kubenswrapper[4762]: I0308 01:31:43.015593 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1550eed8b952d518301a5078317280e4d3a140b2f5170e15b0abc53acb74116"} err="failed to get container status 
\"a1550eed8b952d518301a5078317280e4d3a140b2f5170e15b0abc53acb74116\": rpc error: code = NotFound desc = could not find container \"a1550eed8b952d518301a5078317280e4d3a140b2f5170e15b0abc53acb74116\": container with ID starting with a1550eed8b952d518301a5078317280e4d3a140b2f5170e15b0abc53acb74116 not found: ID does not exist" Mar 08 01:31:43 crc kubenswrapper[4762]: I0308 01:31:43.015635 4762 scope.go:117] "RemoveContainer" containerID="0020b89645f957a5a81a98716d5babbe97fd08ee7bd84d4e9e00cb6c003ab75d" Mar 08 01:31:43 crc kubenswrapper[4762]: E0308 01:31:43.016022 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0020b89645f957a5a81a98716d5babbe97fd08ee7bd84d4e9e00cb6c003ab75d\": container with ID starting with 0020b89645f957a5a81a98716d5babbe97fd08ee7bd84d4e9e00cb6c003ab75d not found: ID does not exist" containerID="0020b89645f957a5a81a98716d5babbe97fd08ee7bd84d4e9e00cb6c003ab75d" Mar 08 01:31:43 crc kubenswrapper[4762]: I0308 01:31:43.016073 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0020b89645f957a5a81a98716d5babbe97fd08ee7bd84d4e9e00cb6c003ab75d"} err="failed to get container status \"0020b89645f957a5a81a98716d5babbe97fd08ee7bd84d4e9e00cb6c003ab75d\": rpc error: code = NotFound desc = could not find container \"0020b89645f957a5a81a98716d5babbe97fd08ee7bd84d4e9e00cb6c003ab75d\": container with ID starting with 0020b89645f957a5a81a98716d5babbe97fd08ee7bd84d4e9e00cb6c003ab75d not found: ID does not exist" Mar 08 01:31:43 crc kubenswrapper[4762]: I0308 01:31:43.279737 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9f0cf8b-f5f0-45d3-a3a8-b33849700479" path="/var/lib/kubelet/pods/c9f0cf8b-f5f0-45d3-a3a8-b33849700479/volumes" Mar 08 01:31:45 crc kubenswrapper[4762]: I0308 01:31:45.972590 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lkg99"] Mar 08 01:31:45 crc 
kubenswrapper[4762]: E0308 01:31:45.973597 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f0cf8b-f5f0-45d3-a3a8-b33849700479" containerName="registry-server" Mar 08 01:31:45 crc kubenswrapper[4762]: I0308 01:31:45.973615 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f0cf8b-f5f0-45d3-a3a8-b33849700479" containerName="registry-server" Mar 08 01:31:45 crc kubenswrapper[4762]: E0308 01:31:45.973643 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f0cf8b-f5f0-45d3-a3a8-b33849700479" containerName="extract-utilities" Mar 08 01:31:45 crc kubenswrapper[4762]: I0308 01:31:45.973652 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f0cf8b-f5f0-45d3-a3a8-b33849700479" containerName="extract-utilities" Mar 08 01:31:45 crc kubenswrapper[4762]: E0308 01:31:45.973673 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9f0cf8b-f5f0-45d3-a3a8-b33849700479" containerName="extract-content" Mar 08 01:31:45 crc kubenswrapper[4762]: I0308 01:31:45.973681 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9f0cf8b-f5f0-45d3-a3a8-b33849700479" containerName="extract-content" Mar 08 01:31:45 crc kubenswrapper[4762]: I0308 01:31:45.973947 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9f0cf8b-f5f0-45d3-a3a8-b33849700479" containerName="registry-server" Mar 08 01:31:45 crc kubenswrapper[4762]: I0308 01:31:45.975957 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:31:45 crc kubenswrapper[4762]: I0308 01:31:45.984828 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lkg99"] Mar 08 01:31:46 crc kubenswrapper[4762]: I0308 01:31:46.137899 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-catalog-content\") pod \"redhat-operators-lkg99\" (UID: \"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b\") " pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:31:46 crc kubenswrapper[4762]: I0308 01:31:46.137993 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrfdm\" (UniqueName: \"kubernetes.io/projected/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-kube-api-access-wrfdm\") pod \"redhat-operators-lkg99\" (UID: \"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b\") " pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:31:46 crc kubenswrapper[4762]: I0308 01:31:46.138217 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-utilities\") pod \"redhat-operators-lkg99\" (UID: \"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b\") " pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:31:46 crc kubenswrapper[4762]: I0308 01:31:46.242430 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-catalog-content\") pod \"redhat-operators-lkg99\" (UID: \"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b\") " pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:31:46 crc kubenswrapper[4762]: I0308 01:31:46.242542 4762 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-wrfdm\" (UniqueName: \"kubernetes.io/projected/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-kube-api-access-wrfdm\") pod \"redhat-operators-lkg99\" (UID: \"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b\") " pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:31:46 crc kubenswrapper[4762]: I0308 01:31:46.242644 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-utilities\") pod \"redhat-operators-lkg99\" (UID: \"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b\") " pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:31:46 crc kubenswrapper[4762]: I0308 01:31:46.243207 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-catalog-content\") pod \"redhat-operators-lkg99\" (UID: \"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b\") " pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:31:46 crc kubenswrapper[4762]: I0308 01:31:46.243729 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-utilities\") pod \"redhat-operators-lkg99\" (UID: \"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b\") " pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:31:46 crc kubenswrapper[4762]: I0308 01:31:46.279211 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrfdm\" (UniqueName: \"kubernetes.io/projected/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-kube-api-access-wrfdm\") pod \"redhat-operators-lkg99\" (UID: \"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b\") " pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:31:46 crc kubenswrapper[4762]: I0308 01:31:46.309382 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:31:46 crc kubenswrapper[4762]: I0308 01:31:46.846064 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lkg99"] Mar 08 01:31:46 crc kubenswrapper[4762]: I0308 01:31:46.927623 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkg99" event={"ID":"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b","Type":"ContainerStarted","Data":"f3b9477859622c7e821214472543d874f6b6027d7657e689bcc0d8c00c7ae77e"} Mar 08 01:31:47 crc kubenswrapper[4762]: I0308 01:31:47.564841 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lkv25"] Mar 08 01:31:47 crc kubenswrapper[4762]: I0308 01:31:47.567131 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:31:47 crc kubenswrapper[4762]: I0308 01:31:47.574227 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-utilities\") pod \"certified-operators-lkv25\" (UID: \"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05\") " pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:31:47 crc kubenswrapper[4762]: I0308 01:31:47.574291 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-catalog-content\") pod \"certified-operators-lkv25\" (UID: \"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05\") " pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:31:47 crc kubenswrapper[4762]: I0308 01:31:47.574319 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk7fg\" (UniqueName: 
\"kubernetes.io/projected/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-kube-api-access-wk7fg\") pod \"certified-operators-lkv25\" (UID: \"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05\") " pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:31:47 crc kubenswrapper[4762]: I0308 01:31:47.596696 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lkv25"] Mar 08 01:31:47 crc kubenswrapper[4762]: I0308 01:31:47.677050 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-utilities\") pod \"certified-operators-lkv25\" (UID: \"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05\") " pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:31:47 crc kubenswrapper[4762]: I0308 01:31:47.677121 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-catalog-content\") pod \"certified-operators-lkv25\" (UID: \"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05\") " pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:31:47 crc kubenswrapper[4762]: I0308 01:31:47.677147 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk7fg\" (UniqueName: \"kubernetes.io/projected/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-kube-api-access-wk7fg\") pod \"certified-operators-lkv25\" (UID: \"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05\") " pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:31:47 crc kubenswrapper[4762]: I0308 01:31:47.677852 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-utilities\") pod \"certified-operators-lkv25\" (UID: \"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05\") " pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:31:47 crc 
kubenswrapper[4762]: I0308 01:31:47.678154 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-catalog-content\") pod \"certified-operators-lkv25\" (UID: \"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05\") " pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:31:47 crc kubenswrapper[4762]: I0308 01:31:47.702849 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk7fg\" (UniqueName: \"kubernetes.io/projected/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-kube-api-access-wk7fg\") pod \"certified-operators-lkv25\" (UID: \"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05\") " pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:31:47 crc kubenswrapper[4762]: I0308 01:31:47.892481 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:31:47 crc kubenswrapper[4762]: I0308 01:31:47.944271 4762 generic.go:334] "Generic (PLEG): container finished" podID="e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b" containerID="7d2be41c72d9e84e9cfba03e0937d6af1033a64e952753f66c868efde919a5dd" exitCode=0 Mar 08 01:31:47 crc kubenswrapper[4762]: I0308 01:31:47.944335 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkg99" event={"ID":"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b","Type":"ContainerDied","Data":"7d2be41c72d9e84e9cfba03e0937d6af1033a64e952753f66c868efde919a5dd"} Mar 08 01:31:48 crc kubenswrapper[4762]: I0308 01:31:48.471386 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lkv25"] Mar 08 01:31:48 crc kubenswrapper[4762]: W0308 01:31:48.817896 4762 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf2aeb82_3ce7_41ee_8a4f_c656c98ada05.slice/crio-c7c6c0b1abeeb2bc99bee5f81fda4e4e33e179c6f8f521855d9e39e522f2c7fe WatchSource:0}: Error finding container c7c6c0b1abeeb2bc99bee5f81fda4e4e33e179c6f8f521855d9e39e522f2c7fe: Status 404 returned error can't find the container with id c7c6c0b1abeeb2bc99bee5f81fda4e4e33e179c6f8f521855d9e39e522f2c7fe Mar 08 01:31:48 crc kubenswrapper[4762]: I0308 01:31:48.955980 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkv25" event={"ID":"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05","Type":"ContainerStarted","Data":"c7c6c0b1abeeb2bc99bee5f81fda4e4e33e179c6f8f521855d9e39e522f2c7fe"} Mar 08 01:31:49 crc kubenswrapper[4762]: I0308 01:31:49.971109 4762 generic.go:334] "Generic (PLEG): container finished" podID="bf2aeb82-3ce7-41ee-8a4f-c656c98ada05" containerID="46ea62cf3c6a6d662c1d9e27742ed158828e67145cba90aa0d4d1c2094a4e22f" exitCode=0 Mar 08 01:31:49 crc kubenswrapper[4762]: I0308 01:31:49.971159 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkv25" event={"ID":"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05","Type":"ContainerDied","Data":"46ea62cf3c6a6d662c1d9e27742ed158828e67145cba90aa0d4d1c2094a4e22f"} Mar 08 01:31:49 crc kubenswrapper[4762]: I0308 01:31:49.973327 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkg99" event={"ID":"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b","Type":"ContainerStarted","Data":"b48a6457c13d8788678d6aaccbe27fcef22df1199f13bee94fca778f0b8443b6"} Mar 08 01:31:51 crc kubenswrapper[4762]: I0308 01:31:51.264501 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:31:51 crc kubenswrapper[4762]: E0308 01:31:51.264860 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:31:52 crc kubenswrapper[4762]: I0308 01:31:52.002998 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkv25" event={"ID":"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05","Type":"ContainerStarted","Data":"3e99b34880050255b4fb9c981c756667c8896e2b04f574091475c61f188229ad"} Mar 08 01:31:53 crc kubenswrapper[4762]: I0308 01:31:53.019296 4762 generic.go:334] "Generic (PLEG): container finished" podID="e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b" containerID="b48a6457c13d8788678d6aaccbe27fcef22df1199f13bee94fca778f0b8443b6" exitCode=0 Mar 08 01:31:53 crc kubenswrapper[4762]: I0308 01:31:53.019361 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkg99" event={"ID":"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b","Type":"ContainerDied","Data":"b48a6457c13d8788678d6aaccbe27fcef22df1199f13bee94fca778f0b8443b6"} Mar 08 01:31:54 crc kubenswrapper[4762]: I0308 01:31:54.035656 4762 generic.go:334] "Generic (PLEG): container finished" podID="bf2aeb82-3ce7-41ee-8a4f-c656c98ada05" containerID="3e99b34880050255b4fb9c981c756667c8896e2b04f574091475c61f188229ad" exitCode=0 Mar 08 01:31:54 crc kubenswrapper[4762]: I0308 01:31:54.035811 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkv25" event={"ID":"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05","Type":"ContainerDied","Data":"3e99b34880050255b4fb9c981c756667c8896e2b04f574091475c61f188229ad"} Mar 08 01:31:55 crc kubenswrapper[4762]: I0308 01:31:55.049102 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkv25" 
event={"ID":"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05","Type":"ContainerStarted","Data":"a1d821166481de4a1bc0f7001e1d1807d7eabecc8e52cff9c72efacf9abc83bb"} Mar 08 01:31:55 crc kubenswrapper[4762]: I0308 01:31:55.051822 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkg99" event={"ID":"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b","Type":"ContainerStarted","Data":"7ad300a6a49e7539e12510ae26878bc42ec6f2fdfaaee4067329589721a3b029"} Mar 08 01:31:55 crc kubenswrapper[4762]: I0308 01:31:55.073991 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lkv25" podStartSLOduration=3.519035599 podStartE2EDuration="8.073975318s" podCreationTimestamp="2026-03-08 01:31:47 +0000 UTC" firstStartedPulling="2026-03-08 01:31:49.973434568 +0000 UTC m=+4131.447578962" lastFinishedPulling="2026-03-08 01:31:54.528374317 +0000 UTC m=+4136.002518681" observedRunningTime="2026-03-08 01:31:55.069895592 +0000 UTC m=+4136.544039946" watchObservedRunningTime="2026-03-08 01:31:55.073975318 +0000 UTC m=+4136.548119662" Mar 08 01:31:55 crc kubenswrapper[4762]: I0308 01:31:55.106272 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lkg99" podStartSLOduration=4.348916669 podStartE2EDuration="10.10625459s" podCreationTimestamp="2026-03-08 01:31:45 +0000 UTC" firstStartedPulling="2026-03-08 01:31:47.946865286 +0000 UTC m=+4129.421009630" lastFinishedPulling="2026-03-08 01:31:53.704203177 +0000 UTC m=+4135.178347551" observedRunningTime="2026-03-08 01:31:55.098989847 +0000 UTC m=+4136.573134241" watchObservedRunningTime="2026-03-08 01:31:55.10625459 +0000 UTC m=+4136.580398934" Mar 08 01:31:56 crc kubenswrapper[4762]: I0308 01:31:56.309873 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:31:56 crc kubenswrapper[4762]: I0308 01:31:56.310190 4762 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:31:57 crc kubenswrapper[4762]: I0308 01:31:57.372595 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkg99" podUID="e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b" containerName="registry-server" probeResult="failure" output=< Mar 08 01:31:57 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 01:31:57 crc kubenswrapper[4762]: > Mar 08 01:31:57 crc kubenswrapper[4762]: I0308 01:31:57.894244 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:31:57 crc kubenswrapper[4762]: I0308 01:31:57.894321 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:31:57 crc kubenswrapper[4762]: I0308 01:31:57.968387 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:32:00 crc kubenswrapper[4762]: I0308 01:32:00.153903 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548892-frtbn"] Mar 08 01:32:00 crc kubenswrapper[4762]: I0308 01:32:00.156054 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548892-frtbn" Mar 08 01:32:00 crc kubenswrapper[4762]: I0308 01:32:00.160265 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:32:00 crc kubenswrapper[4762]: I0308 01:32:00.160487 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:32:00 crc kubenswrapper[4762]: I0308 01:32:00.160658 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:32:00 crc kubenswrapper[4762]: I0308 01:32:00.179207 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548892-frtbn"] Mar 08 01:32:00 crc kubenswrapper[4762]: I0308 01:32:00.259601 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh4zq\" (UniqueName: \"kubernetes.io/projected/2c656452-593d-41b6-8781-1a7f1225ef8a-kube-api-access-lh4zq\") pod \"auto-csr-approver-29548892-frtbn\" (UID: \"2c656452-593d-41b6-8781-1a7f1225ef8a\") " pod="openshift-infra/auto-csr-approver-29548892-frtbn" Mar 08 01:32:00 crc kubenswrapper[4762]: I0308 01:32:00.381823 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh4zq\" (UniqueName: \"kubernetes.io/projected/2c656452-593d-41b6-8781-1a7f1225ef8a-kube-api-access-lh4zq\") pod \"auto-csr-approver-29548892-frtbn\" (UID: \"2c656452-593d-41b6-8781-1a7f1225ef8a\") " pod="openshift-infra/auto-csr-approver-29548892-frtbn" Mar 08 01:32:00 crc kubenswrapper[4762]: I0308 01:32:00.400700 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh4zq\" (UniqueName: \"kubernetes.io/projected/2c656452-593d-41b6-8781-1a7f1225ef8a-kube-api-access-lh4zq\") pod \"auto-csr-approver-29548892-frtbn\" (UID: \"2c656452-593d-41b6-8781-1a7f1225ef8a\") " 
pod="openshift-infra/auto-csr-approver-29548892-frtbn" Mar 08 01:32:00 crc kubenswrapper[4762]: I0308 01:32:00.488711 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548892-frtbn" Mar 08 01:32:00 crc kubenswrapper[4762]: I0308 01:32:00.974662 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548892-frtbn"] Mar 08 01:32:01 crc kubenswrapper[4762]: I0308 01:32:01.132538 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548892-frtbn" event={"ID":"2c656452-593d-41b6-8781-1a7f1225ef8a","Type":"ContainerStarted","Data":"6bc87628f532db2e4f98bc280dd335853c248d0b57ce25a97cb70cc4d52e1168"} Mar 08 01:32:02 crc kubenswrapper[4762]: I0308 01:32:02.264307 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:32:02 crc kubenswrapper[4762]: E0308 01:32:02.264960 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:32:03 crc kubenswrapper[4762]: I0308 01:32:03.155532 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548892-frtbn" event={"ID":"2c656452-593d-41b6-8781-1a7f1225ef8a","Type":"ContainerStarted","Data":"559e9358545cc3e78994f5faa2f21ebd5c95312725dce55bb4494deb612fd417"} Mar 08 01:32:03 crc kubenswrapper[4762]: I0308 01:32:03.177863 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548892-frtbn" podStartSLOduration=2.172963554 
podStartE2EDuration="3.177845592s" podCreationTimestamp="2026-03-08 01:32:00 +0000 UTC" firstStartedPulling="2026-03-08 01:32:00.986820792 +0000 UTC m=+4142.460965176" lastFinishedPulling="2026-03-08 01:32:01.99170283 +0000 UTC m=+4143.465847214" observedRunningTime="2026-03-08 01:32:03.173581042 +0000 UTC m=+4144.647725396" watchObservedRunningTime="2026-03-08 01:32:03.177845592 +0000 UTC m=+4144.651989936" Mar 08 01:32:04 crc kubenswrapper[4762]: I0308 01:32:04.173935 4762 generic.go:334] "Generic (PLEG): container finished" podID="2c656452-593d-41b6-8781-1a7f1225ef8a" containerID="559e9358545cc3e78994f5faa2f21ebd5c95312725dce55bb4494deb612fd417" exitCode=0 Mar 08 01:32:04 crc kubenswrapper[4762]: I0308 01:32:04.174025 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548892-frtbn" event={"ID":"2c656452-593d-41b6-8781-1a7f1225ef8a","Type":"ContainerDied","Data":"559e9358545cc3e78994f5faa2f21ebd5c95312725dce55bb4494deb612fd417"} Mar 08 01:32:05 crc kubenswrapper[4762]: I0308 01:32:05.760241 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548892-frtbn" Mar 08 01:32:05 crc kubenswrapper[4762]: I0308 01:32:05.857332 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh4zq\" (UniqueName: \"kubernetes.io/projected/2c656452-593d-41b6-8781-1a7f1225ef8a-kube-api-access-lh4zq\") pod \"2c656452-593d-41b6-8781-1a7f1225ef8a\" (UID: \"2c656452-593d-41b6-8781-1a7f1225ef8a\") " Mar 08 01:32:05 crc kubenswrapper[4762]: I0308 01:32:05.899068 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c656452-593d-41b6-8781-1a7f1225ef8a-kube-api-access-lh4zq" (OuterVolumeSpecName: "kube-api-access-lh4zq") pod "2c656452-593d-41b6-8781-1a7f1225ef8a" (UID: "2c656452-593d-41b6-8781-1a7f1225ef8a"). InnerVolumeSpecName "kube-api-access-lh4zq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:32:05 crc kubenswrapper[4762]: I0308 01:32:05.960190 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh4zq\" (UniqueName: \"kubernetes.io/projected/2c656452-593d-41b6-8781-1a7f1225ef8a-kube-api-access-lh4zq\") on node \"crc\" DevicePath \"\"" Mar 08 01:32:06 crc kubenswrapper[4762]: I0308 01:32:06.215367 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548892-frtbn" event={"ID":"2c656452-593d-41b6-8781-1a7f1225ef8a","Type":"ContainerDied","Data":"6bc87628f532db2e4f98bc280dd335853c248d0b57ce25a97cb70cc4d52e1168"} Mar 08 01:32:06 crc kubenswrapper[4762]: I0308 01:32:06.215599 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6bc87628f532db2e4f98bc280dd335853c248d0b57ce25a97cb70cc4d52e1168" Mar 08 01:32:06 crc kubenswrapper[4762]: I0308 01:32:06.215465 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548892-frtbn" Mar 08 01:32:06 crc kubenswrapper[4762]: I0308 01:32:06.276939 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548886-zxl2p"] Mar 08 01:32:06 crc kubenswrapper[4762]: I0308 01:32:06.289215 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548886-zxl2p"] Mar 08 01:32:06 crc kubenswrapper[4762]: I0308 01:32:06.384668 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:32:06 crc kubenswrapper[4762]: I0308 01:32:06.460587 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:32:06 crc kubenswrapper[4762]: I0308 01:32:06.639756 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lkg99"] Mar 08 01:32:07 crc 
kubenswrapper[4762]: I0308 01:32:07.287815 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d78d4dd3-c2b7-43bc-b431-250d47800781" path="/var/lib/kubelet/pods/d78d4dd3-c2b7-43bc-b431-250d47800781/volumes" Mar 08 01:32:07 crc kubenswrapper[4762]: I0308 01:32:07.993060 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:32:08 crc kubenswrapper[4762]: I0308 01:32:08.243064 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lkg99" podUID="e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b" containerName="registry-server" containerID="cri-o://7ad300a6a49e7539e12510ae26878bc42ec6f2fdfaaee4067329589721a3b029" gracePeriod=2 Mar 08 01:32:08 crc kubenswrapper[4762]: I0308 01:32:08.889075 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:32:08 crc kubenswrapper[4762]: I0308 01:32:08.945971 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-utilities\") pod \"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b\" (UID: \"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b\") " Mar 08 01:32:08 crc kubenswrapper[4762]: I0308 01:32:08.946041 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-catalog-content\") pod \"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b\" (UID: \"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b\") " Mar 08 01:32:08 crc kubenswrapper[4762]: I0308 01:32:08.946160 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrfdm\" (UniqueName: \"kubernetes.io/projected/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-kube-api-access-wrfdm\") pod 
\"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b\" (UID: \"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b\") " Mar 08 01:32:08 crc kubenswrapper[4762]: I0308 01:32:08.946525 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-utilities" (OuterVolumeSpecName: "utilities") pod "e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b" (UID: "e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:32:08 crc kubenswrapper[4762]: I0308 01:32:08.946699 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:32:08 crc kubenswrapper[4762]: I0308 01:32:08.953680 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-kube-api-access-wrfdm" (OuterVolumeSpecName: "kube-api-access-wrfdm") pod "e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b" (UID: "e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b"). InnerVolumeSpecName "kube-api-access-wrfdm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.032792 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lkv25"] Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.033469 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lkv25" podUID="bf2aeb82-3ce7-41ee-8a4f-c656c98ada05" containerName="registry-server" containerID="cri-o://a1d821166481de4a1bc0f7001e1d1807d7eabecc8e52cff9c72efacf9abc83bb" gracePeriod=2 Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.050909 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrfdm\" (UniqueName: \"kubernetes.io/projected/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-kube-api-access-wrfdm\") on node \"crc\" DevicePath \"\"" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.098160 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b" (UID: "e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.153273 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.255283 4762 generic.go:334] "Generic (PLEG): container finished" podID="bf2aeb82-3ce7-41ee-8a4f-c656c98ada05" containerID="a1d821166481de4a1bc0f7001e1d1807d7eabecc8e52cff9c72efacf9abc83bb" exitCode=0 Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.255355 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkv25" event={"ID":"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05","Type":"ContainerDied","Data":"a1d821166481de4a1bc0f7001e1d1807d7eabecc8e52cff9c72efacf9abc83bb"} Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.264348 4762 generic.go:334] "Generic (PLEG): container finished" podID="e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b" containerID="7ad300a6a49e7539e12510ae26878bc42ec6f2fdfaaee4067329589721a3b029" exitCode=0 Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.271098 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lkg99" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.275043 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkg99" event={"ID":"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b","Type":"ContainerDied","Data":"7ad300a6a49e7539e12510ae26878bc42ec6f2fdfaaee4067329589721a3b029"} Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.275077 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkg99" event={"ID":"e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b","Type":"ContainerDied","Data":"f3b9477859622c7e821214472543d874f6b6027d7657e689bcc0d8c00c7ae77e"} Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.275096 4762 scope.go:117] "RemoveContainer" containerID="7ad300a6a49e7539e12510ae26878bc42ec6f2fdfaaee4067329589721a3b029" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.319999 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lkg99"] Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.326757 4762 scope.go:117] "RemoveContainer" containerID="b48a6457c13d8788678d6aaccbe27fcef22df1199f13bee94fca778f0b8443b6" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.330964 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lkg99"] Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.362159 4762 scope.go:117] "RemoveContainer" containerID="7d2be41c72d9e84e9cfba03e0937d6af1033a64e952753f66c868efde919a5dd" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.387382 4762 scope.go:117] "RemoveContainer" containerID="7ad300a6a49e7539e12510ae26878bc42ec6f2fdfaaee4067329589721a3b029" Mar 08 01:32:09 crc kubenswrapper[4762]: E0308 01:32:09.387868 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7ad300a6a49e7539e12510ae26878bc42ec6f2fdfaaee4067329589721a3b029\": container with ID starting with 7ad300a6a49e7539e12510ae26878bc42ec6f2fdfaaee4067329589721a3b029 not found: ID does not exist" containerID="7ad300a6a49e7539e12510ae26878bc42ec6f2fdfaaee4067329589721a3b029" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.387916 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ad300a6a49e7539e12510ae26878bc42ec6f2fdfaaee4067329589721a3b029"} err="failed to get container status \"7ad300a6a49e7539e12510ae26878bc42ec6f2fdfaaee4067329589721a3b029\": rpc error: code = NotFound desc = could not find container \"7ad300a6a49e7539e12510ae26878bc42ec6f2fdfaaee4067329589721a3b029\": container with ID starting with 7ad300a6a49e7539e12510ae26878bc42ec6f2fdfaaee4067329589721a3b029 not found: ID does not exist" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.387946 4762 scope.go:117] "RemoveContainer" containerID="b48a6457c13d8788678d6aaccbe27fcef22df1199f13bee94fca778f0b8443b6" Mar 08 01:32:09 crc kubenswrapper[4762]: E0308 01:32:09.388335 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b48a6457c13d8788678d6aaccbe27fcef22df1199f13bee94fca778f0b8443b6\": container with ID starting with b48a6457c13d8788678d6aaccbe27fcef22df1199f13bee94fca778f0b8443b6 not found: ID does not exist" containerID="b48a6457c13d8788678d6aaccbe27fcef22df1199f13bee94fca778f0b8443b6" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.388383 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b48a6457c13d8788678d6aaccbe27fcef22df1199f13bee94fca778f0b8443b6"} err="failed to get container status \"b48a6457c13d8788678d6aaccbe27fcef22df1199f13bee94fca778f0b8443b6\": rpc error: code = NotFound desc = could not find container \"b48a6457c13d8788678d6aaccbe27fcef22df1199f13bee94fca778f0b8443b6\": container with ID 
starting with b48a6457c13d8788678d6aaccbe27fcef22df1199f13bee94fca778f0b8443b6 not found: ID does not exist" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.388406 4762 scope.go:117] "RemoveContainer" containerID="7d2be41c72d9e84e9cfba03e0937d6af1033a64e952753f66c868efde919a5dd" Mar 08 01:32:09 crc kubenswrapper[4762]: E0308 01:32:09.388745 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d2be41c72d9e84e9cfba03e0937d6af1033a64e952753f66c868efde919a5dd\": container with ID starting with 7d2be41c72d9e84e9cfba03e0937d6af1033a64e952753f66c868efde919a5dd not found: ID does not exist" containerID="7d2be41c72d9e84e9cfba03e0937d6af1033a64e952753f66c868efde919a5dd" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.388814 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d2be41c72d9e84e9cfba03e0937d6af1033a64e952753f66c868efde919a5dd"} err="failed to get container status \"7d2be41c72d9e84e9cfba03e0937d6af1033a64e952753f66c868efde919a5dd\": rpc error: code = NotFound desc = could not find container \"7d2be41c72d9e84e9cfba03e0937d6af1033a64e952753f66c868efde919a5dd\": container with ID starting with 7d2be41c72d9e84e9cfba03e0937d6af1033a64e952753f66c868efde919a5dd not found: ID does not exist" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.589387 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.669979 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-utilities\") pod \"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05\" (UID: \"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05\") " Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.670566 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-catalog-content\") pod \"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05\" (UID: \"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05\") " Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.670984 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-utilities" (OuterVolumeSpecName: "utilities") pod "bf2aeb82-3ce7-41ee-8a4f-c656c98ada05" (UID: "bf2aeb82-3ce7-41ee-8a4f-c656c98ada05"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.671193 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk7fg\" (UniqueName: \"kubernetes.io/projected/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-kube-api-access-wk7fg\") pod \"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05\" (UID: \"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05\") " Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.672154 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.676539 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-kube-api-access-wk7fg" (OuterVolumeSpecName: "kube-api-access-wk7fg") pod "bf2aeb82-3ce7-41ee-8a4f-c656c98ada05" (UID: "bf2aeb82-3ce7-41ee-8a4f-c656c98ada05"). InnerVolumeSpecName "kube-api-access-wk7fg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.720688 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf2aeb82-3ce7-41ee-8a4f-c656c98ada05" (UID: "bf2aeb82-3ce7-41ee-8a4f-c656c98ada05"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.774373 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk7fg\" (UniqueName: \"kubernetes.io/projected/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-kube-api-access-wk7fg\") on node \"crc\" DevicePath \"\"" Mar 08 01:32:09 crc kubenswrapper[4762]: I0308 01:32:09.774409 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:32:10 crc kubenswrapper[4762]: I0308 01:32:10.296041 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkv25" event={"ID":"bf2aeb82-3ce7-41ee-8a4f-c656c98ada05","Type":"ContainerDied","Data":"c7c6c0b1abeeb2bc99bee5f81fda4e4e33e179c6f8f521855d9e39e522f2c7fe"} Mar 08 01:32:10 crc kubenswrapper[4762]: I0308 01:32:10.296118 4762 scope.go:117] "RemoveContainer" containerID="a1d821166481de4a1bc0f7001e1d1807d7eabecc8e52cff9c72efacf9abc83bb" Mar 08 01:32:10 crc kubenswrapper[4762]: I0308 01:32:10.296308 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lkv25" Mar 08 01:32:10 crc kubenswrapper[4762]: I0308 01:32:10.336311 4762 scope.go:117] "RemoveContainer" containerID="3e99b34880050255b4fb9c981c756667c8896e2b04f574091475c61f188229ad" Mar 08 01:32:10 crc kubenswrapper[4762]: I0308 01:32:10.351689 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lkv25"] Mar 08 01:32:10 crc kubenswrapper[4762]: I0308 01:32:10.361232 4762 scope.go:117] "RemoveContainer" containerID="46ea62cf3c6a6d662c1d9e27742ed158828e67145cba90aa0d4d1c2094a4e22f" Mar 08 01:32:10 crc kubenswrapper[4762]: I0308 01:32:10.362910 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lkv25"] Mar 08 01:32:11 crc kubenswrapper[4762]: I0308 01:32:11.282853 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf2aeb82-3ce7-41ee-8a4f-c656c98ada05" path="/var/lib/kubelet/pods/bf2aeb82-3ce7-41ee-8a4f-c656c98ada05/volumes" Mar 08 01:32:11 crc kubenswrapper[4762]: I0308 01:32:11.284699 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b" path="/var/lib/kubelet/pods/e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b/volumes" Mar 08 01:32:14 crc kubenswrapper[4762]: I0308 01:32:14.264457 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:32:14 crc kubenswrapper[4762]: E0308 01:32:14.265590 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:32:16 crc kubenswrapper[4762]: 
I0308 01:32:16.406414 4762 generic.go:334] "Generic (PLEG): container finished" podID="01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc" containerID="88e5bbc2699cfbe86af7b368bb3d08b18e65011b381ca60ea852b5745ada6abf" exitCode=0 Mar 08 01:32:16 crc kubenswrapper[4762]: I0308 01:32:16.406457 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" event={"ID":"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc","Type":"ContainerDied","Data":"88e5bbc2699cfbe86af7b368bb3d08b18e65011b381ca60ea852b5745ada6abf"} Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.009818 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.095803 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-telemetry-combined-ca-bundle\") pod \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.095939 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ssh-key-openstack-edpm-ipam\") pod \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.096051 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-0\") pod \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.096149 4762 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceph\") pod \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.096181 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-inventory\") pod \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.096200 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-2\") pod \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.096229 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw4r4\" (UniqueName: \"kubernetes.io/projected/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-kube-api-access-tw4r4\") pod \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.096343 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-1\") pod \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\" (UID: \"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc\") " Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.102835 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-telemetry-combined-ca-bundle" (OuterVolumeSpecName: 
"telemetry-combined-ca-bundle") pod "01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc" (UID: "01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.103903 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-kube-api-access-tw4r4" (OuterVolumeSpecName: "kube-api-access-tw4r4") pod "01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc" (UID: "01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc"). InnerVolumeSpecName "kube-api-access-tw4r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.105034 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceph" (OuterVolumeSpecName: "ceph") pod "01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc" (UID: "01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.135005 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc" (UID: "01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.135856 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc" (UID: "01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc"). 
InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.143863 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc" (UID: "01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.160704 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc" (UID: "01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.165571 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-inventory" (OuterVolumeSpecName: "inventory") pod "01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc" (UID: "01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.198747 4762 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.198808 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.198831 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.198851 4762 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.198871 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw4r4\" (UniqueName: \"kubernetes.io/projected/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-kube-api-access-tw4r4\") on node \"crc\" DevicePath \"\"" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.198889 4762 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.198906 4762 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-telemetry-combined-ca-bundle\") 
on node \"crc\" DevicePath \"\"" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.198924 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.435800 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" event={"ID":"01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc","Type":"ContainerDied","Data":"c362f601b559d582317966d986adc4f197cd026675205ea90049ef48bae1035a"} Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.435852 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c362f601b559d582317966d986adc4f197cd026675205ea90049ef48bae1035a" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.435872 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.576129 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s"] Mar 08 01:32:18 crc kubenswrapper[4762]: E0308 01:32:18.576564 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b" containerName="registry-server" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.576588 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b" containerName="registry-server" Mar 08 01:32:18 crc kubenswrapper[4762]: E0308 01:32:18.576603 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf2aeb82-3ce7-41ee-8a4f-c656c98ada05" containerName="extract-utilities" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.576611 4762 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="bf2aeb82-3ce7-41ee-8a4f-c656c98ada05" containerName="extract-utilities" Mar 08 01:32:18 crc kubenswrapper[4762]: E0308 01:32:18.576628 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.576637 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 08 01:32:18 crc kubenswrapper[4762]: E0308 01:32:18.576651 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf2aeb82-3ce7-41ee-8a4f-c656c98ada05" containerName="registry-server" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.576658 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf2aeb82-3ce7-41ee-8a4f-c656c98ada05" containerName="registry-server" Mar 08 01:32:18 crc kubenswrapper[4762]: E0308 01:32:18.576678 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf2aeb82-3ce7-41ee-8a4f-c656c98ada05" containerName="extract-content" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.576685 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf2aeb82-3ce7-41ee-8a4f-c656c98ada05" containerName="extract-content" Mar 08 01:32:18 crc kubenswrapper[4762]: E0308 01:32:18.576701 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b" containerName="extract-utilities" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.576709 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b" containerName="extract-utilities" Mar 08 01:32:18 crc kubenswrapper[4762]: E0308 01:32:18.576735 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b" containerName="extract-content" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 
01:32:18.576744 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b" containerName="extract-content" Mar 08 01:32:18 crc kubenswrapper[4762]: E0308 01:32:18.576771 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c656452-593d-41b6-8781-1a7f1225ef8a" containerName="oc" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.576779 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c656452-593d-41b6-8781-1a7f1225ef8a" containerName="oc" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.577007 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6d6074a-e2ed-4268-a2ff-c9d6feca9a6b" containerName="registry-server" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.577029 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf2aeb82-3ce7-41ee-8a4f-c656c98ada05" containerName="registry-server" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.577038 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c656452-593d-41b6-8781-1a7f1225ef8a" containerName="oc" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.577053 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.577842 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.581802 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.582051 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.582097 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.582465 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.582578 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.584581 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.595492 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s"] Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.612734 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceph\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.613236 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njqfr\" 
(UniqueName: \"kubernetes.io/projected/677ad41b-e2cb-4329-a014-3427d8ec936b-kube-api-access-njqfr\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.613467 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.613535 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.613605 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.613663 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.613802 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.613924 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.715881 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceph\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.716075 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njqfr\" (UniqueName: 
\"kubernetes.io/projected/677ad41b-e2cb-4329-a014-3427d8ec936b-kube-api-access-njqfr\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.716132 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.716189 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.717108 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.717156 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.717196 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.717235 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.720835 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.721391 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.721423 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.722473 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.722716 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceph\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.723453 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: 
\"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.729398 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.737530 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njqfr\" (UniqueName: \"kubernetes.io/projected/677ad41b-e2cb-4329-a014-3427d8ec936b-kube-api-access-njqfr\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:18 crc kubenswrapper[4762]: I0308 01:32:18.913287 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:32:19 crc kubenswrapper[4762]: I0308 01:32:19.482141 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s"] Mar 08 01:32:20 crc kubenswrapper[4762]: I0308 01:32:20.462069 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" event={"ID":"677ad41b-e2cb-4329-a014-3427d8ec936b","Type":"ContainerStarted","Data":"4689a314afc8225bb405e8fe9501a03762fd624359e66f4e8bd216bbddc00cfb"} Mar 08 01:32:21 crc kubenswrapper[4762]: I0308 01:32:21.476737 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" event={"ID":"677ad41b-e2cb-4329-a014-3427d8ec936b","Type":"ContainerStarted","Data":"54489358a690ae146cb875061a3d0d24fb2d6e7aff5d27c20886e8540fd8ebbc"} Mar 08 01:32:21 crc kubenswrapper[4762]: I0308 01:32:21.509070 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" podStartSLOduration=3.012725609 podStartE2EDuration="3.509040565s" podCreationTimestamp="2026-03-08 01:32:18 +0000 UTC" firstStartedPulling="2026-03-08 01:32:20.406966218 +0000 UTC m=+4161.881110562" lastFinishedPulling="2026-03-08 01:32:20.903281174 +0000 UTC m=+4162.377425518" observedRunningTime="2026-03-08 01:32:21.499056488 +0000 UTC m=+4162.973200832" watchObservedRunningTime="2026-03-08 01:32:21.509040565 +0000 UTC m=+4162.983184939" Mar 08 01:32:26 crc kubenswrapper[4762]: I0308 01:32:26.263998 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:32:26 crc kubenswrapper[4762]: E0308 01:32:26.264714 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:32:32 crc kubenswrapper[4762]: I0308 01:32:32.616727 4762 scope.go:117] "RemoveContainer" containerID="b452f6792752eeba8b15844ec80e34f61b799077983c0db1e95ca8405c767c7b" Mar 08 01:32:37 crc kubenswrapper[4762]: I0308 01:32:37.263540 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:32:37 crc kubenswrapper[4762]: E0308 01:32:37.264710 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:32:49 crc kubenswrapper[4762]: I0308 01:32:49.277138 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:32:49 crc kubenswrapper[4762]: E0308 01:32:49.278907 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:33:00 crc kubenswrapper[4762]: I0308 01:33:00.264076 4762 scope.go:117] "RemoveContainer" 
containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:33:00 crc kubenswrapper[4762]: E0308 01:33:00.266081 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:33:13 crc kubenswrapper[4762]: I0308 01:33:13.263894 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:33:13 crc kubenswrapper[4762]: E0308 01:33:13.264891 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:33:24 crc kubenswrapper[4762]: I0308 01:33:24.264477 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:33:24 crc kubenswrapper[4762]: E0308 01:33:24.265429 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:33:35 crc kubenswrapper[4762]: I0308 01:33:35.264858 4762 scope.go:117] 
"RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:33:35 crc kubenswrapper[4762]: E0308 01:33:35.265985 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:33:48 crc kubenswrapper[4762]: I0308 01:33:48.263271 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:33:48 crc kubenswrapper[4762]: I0308 01:33:48.602555 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"cc3cf29ce2882f741ddb013e11a9e2102ae73e3bbe7a3960edaf9195694f3df5"} Mar 08 01:33:52 crc kubenswrapper[4762]: I0308 01:33:52.381883 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q8c28"] Mar 08 01:33:52 crc kubenswrapper[4762]: I0308 01:33:52.386345 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:33:52 crc kubenswrapper[4762]: I0308 01:33:52.402804 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8c28"] Mar 08 01:33:52 crc kubenswrapper[4762]: I0308 01:33:52.504679 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/660c052f-1ed5-492d-8f20-01ac3807eefb-utilities\") pod \"redhat-marketplace-q8c28\" (UID: \"660c052f-1ed5-492d-8f20-01ac3807eefb\") " pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:33:52 crc kubenswrapper[4762]: I0308 01:33:52.505050 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t8ll\" (UniqueName: \"kubernetes.io/projected/660c052f-1ed5-492d-8f20-01ac3807eefb-kube-api-access-7t8ll\") pod \"redhat-marketplace-q8c28\" (UID: \"660c052f-1ed5-492d-8f20-01ac3807eefb\") " pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:33:52 crc kubenswrapper[4762]: I0308 01:33:52.505201 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/660c052f-1ed5-492d-8f20-01ac3807eefb-catalog-content\") pod \"redhat-marketplace-q8c28\" (UID: \"660c052f-1ed5-492d-8f20-01ac3807eefb\") " pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:33:52 crc kubenswrapper[4762]: I0308 01:33:52.607365 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/660c052f-1ed5-492d-8f20-01ac3807eefb-catalog-content\") pod \"redhat-marketplace-q8c28\" (UID: \"660c052f-1ed5-492d-8f20-01ac3807eefb\") " pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:33:52 crc kubenswrapper[4762]: I0308 01:33:52.607518 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/660c052f-1ed5-492d-8f20-01ac3807eefb-utilities\") pod \"redhat-marketplace-q8c28\" (UID: \"660c052f-1ed5-492d-8f20-01ac3807eefb\") " pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:33:52 crc kubenswrapper[4762]: I0308 01:33:52.607559 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t8ll\" (UniqueName: \"kubernetes.io/projected/660c052f-1ed5-492d-8f20-01ac3807eefb-kube-api-access-7t8ll\") pod \"redhat-marketplace-q8c28\" (UID: \"660c052f-1ed5-492d-8f20-01ac3807eefb\") " pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:33:52 crc kubenswrapper[4762]: I0308 01:33:52.608320 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/660c052f-1ed5-492d-8f20-01ac3807eefb-catalog-content\") pod \"redhat-marketplace-q8c28\" (UID: \"660c052f-1ed5-492d-8f20-01ac3807eefb\") " pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:33:52 crc kubenswrapper[4762]: I0308 01:33:52.608588 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/660c052f-1ed5-492d-8f20-01ac3807eefb-utilities\") pod \"redhat-marketplace-q8c28\" (UID: \"660c052f-1ed5-492d-8f20-01ac3807eefb\") " pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:33:52 crc kubenswrapper[4762]: I0308 01:33:52.640720 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t8ll\" (UniqueName: \"kubernetes.io/projected/660c052f-1ed5-492d-8f20-01ac3807eefb-kube-api-access-7t8ll\") pod \"redhat-marketplace-q8c28\" (UID: \"660c052f-1ed5-492d-8f20-01ac3807eefb\") " pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:33:52 crc kubenswrapper[4762]: I0308 01:33:52.751826 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:33:53 crc kubenswrapper[4762]: I0308 01:33:53.199418 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8c28"] Mar 08 01:33:53 crc kubenswrapper[4762]: W0308 01:33:53.200366 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod660c052f_1ed5_492d_8f20_01ac3807eefb.slice/crio-87d370b596e39ba292dc4c028c023fd188390bce7df755ec9ec088893e71e5e5 WatchSource:0}: Error finding container 87d370b596e39ba292dc4c028c023fd188390bce7df755ec9ec088893e71e5e5: Status 404 returned error can't find the container with id 87d370b596e39ba292dc4c028c023fd188390bce7df755ec9ec088893e71e5e5 Mar 08 01:33:53 crc kubenswrapper[4762]: I0308 01:33:53.657592 4762 generic.go:334] "Generic (PLEG): container finished" podID="660c052f-1ed5-492d-8f20-01ac3807eefb" containerID="52053ac55350452d17a565ada59a1b6a8bbf0396c37433b629c23397adfac744" exitCode=0 Mar 08 01:33:53 crc kubenswrapper[4762]: I0308 01:33:53.657690 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8c28" event={"ID":"660c052f-1ed5-492d-8f20-01ac3807eefb","Type":"ContainerDied","Data":"52053ac55350452d17a565ada59a1b6a8bbf0396c37433b629c23397adfac744"} Mar 08 01:33:53 crc kubenswrapper[4762]: I0308 01:33:53.658183 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8c28" event={"ID":"660c052f-1ed5-492d-8f20-01ac3807eefb","Type":"ContainerStarted","Data":"87d370b596e39ba292dc4c028c023fd188390bce7df755ec9ec088893e71e5e5"} Mar 08 01:33:53 crc kubenswrapper[4762]: I0308 01:33:53.660633 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 01:33:54 crc kubenswrapper[4762]: I0308 01:33:54.676028 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-q8c28" event={"ID":"660c052f-1ed5-492d-8f20-01ac3807eefb","Type":"ContainerStarted","Data":"5a68f7dd5a847fc0e5412bf15855e2d855c4a67e973bf515f37df32ffb73ba8f"} Mar 08 01:33:55 crc kubenswrapper[4762]: I0308 01:33:55.692076 4762 generic.go:334] "Generic (PLEG): container finished" podID="660c052f-1ed5-492d-8f20-01ac3807eefb" containerID="5a68f7dd5a847fc0e5412bf15855e2d855c4a67e973bf515f37df32ffb73ba8f" exitCode=0 Mar 08 01:33:55 crc kubenswrapper[4762]: I0308 01:33:55.692237 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8c28" event={"ID":"660c052f-1ed5-492d-8f20-01ac3807eefb","Type":"ContainerDied","Data":"5a68f7dd5a847fc0e5412bf15855e2d855c4a67e973bf515f37df32ffb73ba8f"} Mar 08 01:33:56 crc kubenswrapper[4762]: I0308 01:33:56.708103 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8c28" event={"ID":"660c052f-1ed5-492d-8f20-01ac3807eefb","Type":"ContainerStarted","Data":"d84f51ec7bc78b4bdae8244669ed0b06c5bff358b66181afb754826e7ed6e1dd"} Mar 08 01:33:56 crc kubenswrapper[4762]: I0308 01:33:56.750612 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q8c28" podStartSLOduration=2.300416639 podStartE2EDuration="4.75059372s" podCreationTimestamp="2026-03-08 01:33:52 +0000 UTC" firstStartedPulling="2026-03-08 01:33:53.660208718 +0000 UTC m=+4255.134353102" lastFinishedPulling="2026-03-08 01:33:56.110385829 +0000 UTC m=+4257.584530183" observedRunningTime="2026-03-08 01:33:56.73302852 +0000 UTC m=+4258.207172904" watchObservedRunningTime="2026-03-08 01:33:56.75059372 +0000 UTC m=+4258.224738074" Mar 08 01:34:00 crc kubenswrapper[4762]: I0308 01:34:00.160716 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548894-gbfkv"] Mar 08 01:34:00 crc kubenswrapper[4762]: I0308 01:34:00.164881 4762 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548894-gbfkv" Mar 08 01:34:00 crc kubenswrapper[4762]: I0308 01:34:00.173359 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548894-gbfkv"] Mar 08 01:34:00 crc kubenswrapper[4762]: I0308 01:34:00.189817 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:34:00 crc kubenswrapper[4762]: I0308 01:34:00.189907 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:34:00 crc kubenswrapper[4762]: I0308 01:34:00.190034 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:34:00 crc kubenswrapper[4762]: I0308 01:34:00.289038 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkd54\" (UniqueName: \"kubernetes.io/projected/19c6075f-11f7-41f5-a389-9695c2b37e66-kube-api-access-fkd54\") pod \"auto-csr-approver-29548894-gbfkv\" (UID: \"19c6075f-11f7-41f5-a389-9695c2b37e66\") " pod="openshift-infra/auto-csr-approver-29548894-gbfkv" Mar 08 01:34:00 crc kubenswrapper[4762]: I0308 01:34:00.390846 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkd54\" (UniqueName: \"kubernetes.io/projected/19c6075f-11f7-41f5-a389-9695c2b37e66-kube-api-access-fkd54\") pod \"auto-csr-approver-29548894-gbfkv\" (UID: \"19c6075f-11f7-41f5-a389-9695c2b37e66\") " pod="openshift-infra/auto-csr-approver-29548894-gbfkv" Mar 08 01:34:00 crc kubenswrapper[4762]: I0308 01:34:00.413109 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkd54\" (UniqueName: \"kubernetes.io/projected/19c6075f-11f7-41f5-a389-9695c2b37e66-kube-api-access-fkd54\") pod \"auto-csr-approver-29548894-gbfkv\" (UID: 
\"19c6075f-11f7-41f5-a389-9695c2b37e66\") " pod="openshift-infra/auto-csr-approver-29548894-gbfkv" Mar 08 01:34:00 crc kubenswrapper[4762]: I0308 01:34:00.525184 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548894-gbfkv" Mar 08 01:34:00 crc kubenswrapper[4762]: I0308 01:34:00.980887 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548894-gbfkv"] Mar 08 01:34:01 crc kubenswrapper[4762]: I0308 01:34:01.760373 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548894-gbfkv" event={"ID":"19c6075f-11f7-41f5-a389-9695c2b37e66","Type":"ContainerStarted","Data":"4a21c03c0308848a3122955a9b0cda1515ffdbdb02d3d45441401a4d6826832b"} Mar 08 01:34:02 crc kubenswrapper[4762]: I0308 01:34:02.752473 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:34:02 crc kubenswrapper[4762]: I0308 01:34:02.753092 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:34:02 crc kubenswrapper[4762]: I0308 01:34:02.772180 4762 generic.go:334] "Generic (PLEG): container finished" podID="19c6075f-11f7-41f5-a389-9695c2b37e66" containerID="b2f1dd7d8a2ffed021ad13e8bf2019ee2fe08c8b4c5a288f4c94d44826c8d8a7" exitCode=0 Mar 08 01:34:02 crc kubenswrapper[4762]: I0308 01:34:02.772232 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548894-gbfkv" event={"ID":"19c6075f-11f7-41f5-a389-9695c2b37e66","Type":"ContainerDied","Data":"b2f1dd7d8a2ffed021ad13e8bf2019ee2fe08c8b4c5a288f4c94d44826c8d8a7"} Mar 08 01:34:02 crc kubenswrapper[4762]: I0308 01:34:02.802026 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:34:02 crc kubenswrapper[4762]: I0308 01:34:02.858838 
4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:34:03 crc kubenswrapper[4762]: I0308 01:34:03.053339 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8c28"] Mar 08 01:34:04 crc kubenswrapper[4762]: I0308 01:34:04.260829 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548894-gbfkv" Mar 08 01:34:04 crc kubenswrapper[4762]: I0308 01:34:04.387697 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkd54\" (UniqueName: \"kubernetes.io/projected/19c6075f-11f7-41f5-a389-9695c2b37e66-kube-api-access-fkd54\") pod \"19c6075f-11f7-41f5-a389-9695c2b37e66\" (UID: \"19c6075f-11f7-41f5-a389-9695c2b37e66\") " Mar 08 01:34:04 crc kubenswrapper[4762]: I0308 01:34:04.407552 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19c6075f-11f7-41f5-a389-9695c2b37e66-kube-api-access-fkd54" (OuterVolumeSpecName: "kube-api-access-fkd54") pod "19c6075f-11f7-41f5-a389-9695c2b37e66" (UID: "19c6075f-11f7-41f5-a389-9695c2b37e66"). InnerVolumeSpecName "kube-api-access-fkd54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:34:04 crc kubenswrapper[4762]: I0308 01:34:04.490614 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkd54\" (UniqueName: \"kubernetes.io/projected/19c6075f-11f7-41f5-a389-9695c2b37e66-kube-api-access-fkd54\") on node \"crc\" DevicePath \"\"" Mar 08 01:34:04 crc kubenswrapper[4762]: I0308 01:34:04.796596 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548894-gbfkv" event={"ID":"19c6075f-11f7-41f5-a389-9695c2b37e66","Type":"ContainerDied","Data":"4a21c03c0308848a3122955a9b0cda1515ffdbdb02d3d45441401a4d6826832b"} Mar 08 01:34:04 crc kubenswrapper[4762]: I0308 01:34:04.796976 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a21c03c0308848a3122955a9b0cda1515ffdbdb02d3d45441401a4d6826832b" Mar 08 01:34:04 crc kubenswrapper[4762]: I0308 01:34:04.796782 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q8c28" podUID="660c052f-1ed5-492d-8f20-01ac3807eefb" containerName="registry-server" containerID="cri-o://d84f51ec7bc78b4bdae8244669ed0b06c5bff358b66181afb754826e7ed6e1dd" gracePeriod=2 Mar 08 01:34:04 crc kubenswrapper[4762]: I0308 01:34:04.796666 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548894-gbfkv" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.343909 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548888-ptgtj"] Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.356571 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548888-ptgtj"] Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.363936 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.514314 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/660c052f-1ed5-492d-8f20-01ac3807eefb-catalog-content\") pod \"660c052f-1ed5-492d-8f20-01ac3807eefb\" (UID: \"660c052f-1ed5-492d-8f20-01ac3807eefb\") " Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.514828 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t8ll\" (UniqueName: \"kubernetes.io/projected/660c052f-1ed5-492d-8f20-01ac3807eefb-kube-api-access-7t8ll\") pod \"660c052f-1ed5-492d-8f20-01ac3807eefb\" (UID: \"660c052f-1ed5-492d-8f20-01ac3807eefb\") " Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.515001 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/660c052f-1ed5-492d-8f20-01ac3807eefb-utilities\") pod \"660c052f-1ed5-492d-8f20-01ac3807eefb\" (UID: \"660c052f-1ed5-492d-8f20-01ac3807eefb\") " Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.515674 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/660c052f-1ed5-492d-8f20-01ac3807eefb-utilities" (OuterVolumeSpecName: "utilities") pod "660c052f-1ed5-492d-8f20-01ac3807eefb" (UID: "660c052f-1ed5-492d-8f20-01ac3807eefb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.525559 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/660c052f-1ed5-492d-8f20-01ac3807eefb-kube-api-access-7t8ll" (OuterVolumeSpecName: "kube-api-access-7t8ll") pod "660c052f-1ed5-492d-8f20-01ac3807eefb" (UID: "660c052f-1ed5-492d-8f20-01ac3807eefb"). InnerVolumeSpecName "kube-api-access-7t8ll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.569573 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/660c052f-1ed5-492d-8f20-01ac3807eefb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "660c052f-1ed5-492d-8f20-01ac3807eefb" (UID: "660c052f-1ed5-492d-8f20-01ac3807eefb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.618427 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/660c052f-1ed5-492d-8f20-01ac3807eefb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.618467 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t8ll\" (UniqueName: \"kubernetes.io/projected/660c052f-1ed5-492d-8f20-01ac3807eefb-kube-api-access-7t8ll\") on node \"crc\" DevicePath \"\"" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.618482 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/660c052f-1ed5-492d-8f20-01ac3807eefb-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.825310 4762 generic.go:334] "Generic (PLEG): container finished" podID="660c052f-1ed5-492d-8f20-01ac3807eefb" containerID="d84f51ec7bc78b4bdae8244669ed0b06c5bff358b66181afb754826e7ed6e1dd" exitCode=0 Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.825352 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8c28" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.825368 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8c28" event={"ID":"660c052f-1ed5-492d-8f20-01ac3807eefb","Type":"ContainerDied","Data":"d84f51ec7bc78b4bdae8244669ed0b06c5bff358b66181afb754826e7ed6e1dd"} Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.825405 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8c28" event={"ID":"660c052f-1ed5-492d-8f20-01ac3807eefb","Type":"ContainerDied","Data":"87d370b596e39ba292dc4c028c023fd188390bce7df755ec9ec088893e71e5e5"} Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.825431 4762 scope.go:117] "RemoveContainer" containerID="d84f51ec7bc78b4bdae8244669ed0b06c5bff358b66181afb754826e7ed6e1dd" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.865843 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8c28"] Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.867175 4762 scope.go:117] "RemoveContainer" containerID="5a68f7dd5a847fc0e5412bf15855e2d855c4a67e973bf515f37df32ffb73ba8f" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.875386 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8c28"] Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.920550 4762 scope.go:117] "RemoveContainer" containerID="52053ac55350452d17a565ada59a1b6a8bbf0396c37433b629c23397adfac744" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.950196 4762 scope.go:117] "RemoveContainer" containerID="d84f51ec7bc78b4bdae8244669ed0b06c5bff358b66181afb754826e7ed6e1dd" Mar 08 01:34:05 crc kubenswrapper[4762]: E0308 01:34:05.950784 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d84f51ec7bc78b4bdae8244669ed0b06c5bff358b66181afb754826e7ed6e1dd\": container with ID starting with d84f51ec7bc78b4bdae8244669ed0b06c5bff358b66181afb754826e7ed6e1dd not found: ID does not exist" containerID="d84f51ec7bc78b4bdae8244669ed0b06c5bff358b66181afb754826e7ed6e1dd" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.950832 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d84f51ec7bc78b4bdae8244669ed0b06c5bff358b66181afb754826e7ed6e1dd"} err="failed to get container status \"d84f51ec7bc78b4bdae8244669ed0b06c5bff358b66181afb754826e7ed6e1dd\": rpc error: code = NotFound desc = could not find container \"d84f51ec7bc78b4bdae8244669ed0b06c5bff358b66181afb754826e7ed6e1dd\": container with ID starting with d84f51ec7bc78b4bdae8244669ed0b06c5bff358b66181afb754826e7ed6e1dd not found: ID does not exist" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.950860 4762 scope.go:117] "RemoveContainer" containerID="5a68f7dd5a847fc0e5412bf15855e2d855c4a67e973bf515f37df32ffb73ba8f" Mar 08 01:34:05 crc kubenswrapper[4762]: E0308 01:34:05.951266 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a68f7dd5a847fc0e5412bf15855e2d855c4a67e973bf515f37df32ffb73ba8f\": container with ID starting with 5a68f7dd5a847fc0e5412bf15855e2d855c4a67e973bf515f37df32ffb73ba8f not found: ID does not exist" containerID="5a68f7dd5a847fc0e5412bf15855e2d855c4a67e973bf515f37df32ffb73ba8f" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.951312 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a68f7dd5a847fc0e5412bf15855e2d855c4a67e973bf515f37df32ffb73ba8f"} err="failed to get container status \"5a68f7dd5a847fc0e5412bf15855e2d855c4a67e973bf515f37df32ffb73ba8f\": rpc error: code = NotFound desc = could not find container \"5a68f7dd5a847fc0e5412bf15855e2d855c4a67e973bf515f37df32ffb73ba8f\": container with ID 
starting with 5a68f7dd5a847fc0e5412bf15855e2d855c4a67e973bf515f37df32ffb73ba8f not found: ID does not exist" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.951337 4762 scope.go:117] "RemoveContainer" containerID="52053ac55350452d17a565ada59a1b6a8bbf0396c37433b629c23397adfac744" Mar 08 01:34:05 crc kubenswrapper[4762]: E0308 01:34:05.951768 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52053ac55350452d17a565ada59a1b6a8bbf0396c37433b629c23397adfac744\": container with ID starting with 52053ac55350452d17a565ada59a1b6a8bbf0396c37433b629c23397adfac744 not found: ID does not exist" containerID="52053ac55350452d17a565ada59a1b6a8bbf0396c37433b629c23397adfac744" Mar 08 01:34:05 crc kubenswrapper[4762]: I0308 01:34:05.951831 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52053ac55350452d17a565ada59a1b6a8bbf0396c37433b629c23397adfac744"} err="failed to get container status \"52053ac55350452d17a565ada59a1b6a8bbf0396c37433b629c23397adfac744\": rpc error: code = NotFound desc = could not find container \"52053ac55350452d17a565ada59a1b6a8bbf0396c37433b629c23397adfac744\": container with ID starting with 52053ac55350452d17a565ada59a1b6a8bbf0396c37433b629c23397adfac744 not found: ID does not exist" Mar 08 01:34:07 crc kubenswrapper[4762]: I0308 01:34:07.289141 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="660c052f-1ed5-492d-8f20-01ac3807eefb" path="/var/lib/kubelet/pods/660c052f-1ed5-492d-8f20-01ac3807eefb/volumes" Mar 08 01:34:07 crc kubenswrapper[4762]: I0308 01:34:07.291433 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96aaeade-1629-4d49-8cbe-16d893443bbb" path="/var/lib/kubelet/pods/96aaeade-1629-4d49-8cbe-16d893443bbb/volumes" Mar 08 01:34:33 crc kubenswrapper[4762]: I0308 01:34:33.007288 4762 scope.go:117] "RemoveContainer" 
containerID="d5f1528e612983e9f39017a074eace85e01e565c27b83c27714b130adc05f948" Mar 08 01:34:51 crc kubenswrapper[4762]: I0308 01:34:51.418355 4762 generic.go:334] "Generic (PLEG): container finished" podID="677ad41b-e2cb-4329-a014-3427d8ec936b" containerID="54489358a690ae146cb875061a3d0d24fb2d6e7aff5d27c20886e8540fd8ebbc" exitCode=0 Mar 08 01:34:51 crc kubenswrapper[4762]: I0308 01:34:51.418605 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" event={"ID":"677ad41b-e2cb-4329-a014-3427d8ec936b","Type":"ContainerDied","Data":"54489358a690ae146cb875061a3d0d24fb2d6e7aff5d27c20886e8540fd8ebbc"} Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.042670 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.106917 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njqfr\" (UniqueName: \"kubernetes.io/projected/677ad41b-e2cb-4329-a014-3427d8ec936b-kube-api-access-njqfr\") pod \"677ad41b-e2cb-4329-a014-3427d8ec936b\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.106974 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ssh-key-openstack-edpm-ipam\") pod \"677ad41b-e2cb-4329-a014-3427d8ec936b\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.107102 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-telemetry-power-monitoring-combined-ca-bundle\") pod 
\"677ad41b-e2cb-4329-a014-3427d8ec936b\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.107192 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-2\") pod \"677ad41b-e2cb-4329-a014-3427d8ec936b\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.107227 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceph\") pod \"677ad41b-e2cb-4329-a014-3427d8ec936b\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.107278 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-1\") pod \"677ad41b-e2cb-4329-a014-3427d8ec936b\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.107418 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-inventory\") pod \"677ad41b-e2cb-4329-a014-3427d8ec936b\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.107445 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-0\") pod \"677ad41b-e2cb-4329-a014-3427d8ec936b\" (UID: \"677ad41b-e2cb-4329-a014-3427d8ec936b\") " Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.121409 4762 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "677ad41b-e2cb-4329-a014-3427d8ec936b" (UID: "677ad41b-e2cb-4329-a014-3427d8ec936b"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.125802 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/677ad41b-e2cb-4329-a014-3427d8ec936b-kube-api-access-njqfr" (OuterVolumeSpecName: "kube-api-access-njqfr") pod "677ad41b-e2cb-4329-a014-3427d8ec936b" (UID: "677ad41b-e2cb-4329-a014-3427d8ec936b"). InnerVolumeSpecName "kube-api-access-njqfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.135928 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceph" (OuterVolumeSpecName: "ceph") pod "677ad41b-e2cb-4329-a014-3427d8ec936b" (UID: "677ad41b-e2cb-4329-a014-3427d8ec936b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.151293 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-inventory" (OuterVolumeSpecName: "inventory") pod "677ad41b-e2cb-4329-a014-3427d8ec936b" (UID: "677ad41b-e2cb-4329-a014-3427d8ec936b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.151632 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "677ad41b-e2cb-4329-a014-3427d8ec936b" (UID: "677ad41b-e2cb-4329-a014-3427d8ec936b"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.165519 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "677ad41b-e2cb-4329-a014-3427d8ec936b" (UID: "677ad41b-e2cb-4329-a014-3427d8ec936b"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.168509 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "677ad41b-e2cb-4329-a014-3427d8ec936b" (UID: "677ad41b-e2cb-4329-a014-3427d8ec936b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.173828 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "677ad41b-e2cb-4329-a014-3427d8ec936b" (UID: "677ad41b-e2cb-4329-a014-3427d8ec936b"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.210505 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-inventory\") on node \"crc\" DevicePath \"\"" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.210546 4762 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.210560 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njqfr\" (UniqueName: \"kubernetes.io/projected/677ad41b-e2cb-4329-a014-3427d8ec936b-kube-api-access-njqfr\") on node \"crc\" DevicePath \"\"" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.210575 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.210588 4762 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.210602 4762 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.210614 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.210626 4762 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/677ad41b-e2cb-4329-a014-3427d8ec936b-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.445260 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" event={"ID":"677ad41b-e2cb-4329-a014-3427d8ec936b","Type":"ContainerDied","Data":"4689a314afc8225bb405e8fe9501a03762fd624359e66f4e8bd216bbddc00cfb"} Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.445302 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4689a314afc8225bb405e8fe9501a03762fd624359e66f4e8bd216bbddc00cfb" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.445313 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.573283 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"] Mar 08 01:34:53 crc kubenswrapper[4762]: E0308 01:34:53.574119 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19c6075f-11f7-41f5-a389-9695c2b37e66" containerName="oc" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.574146 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="19c6075f-11f7-41f5-a389-9695c2b37e66" containerName="oc" Mar 08 01:34:53 crc kubenswrapper[4762]: E0308 01:34:53.574165 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660c052f-1ed5-492d-8f20-01ac3807eefb" containerName="extract-utilities" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.574174 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="660c052f-1ed5-492d-8f20-01ac3807eefb" containerName="extract-utilities" Mar 08 01:34:53 crc kubenswrapper[4762]: E0308 01:34:53.574191 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660c052f-1ed5-492d-8f20-01ac3807eefb" containerName="registry-server" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.574198 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="660c052f-1ed5-492d-8f20-01ac3807eefb" containerName="registry-server" Mar 08 01:34:53 crc kubenswrapper[4762]: E0308 01:34:53.574212 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="677ad41b-e2cb-4329-a014-3427d8ec936b" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.574221 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="677ad41b-e2cb-4329-a014-3427d8ec936b" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 08 01:34:53 crc kubenswrapper[4762]: E0308 
01:34:53.574241 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660c052f-1ed5-492d-8f20-01ac3807eefb" containerName="extract-content" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.574248 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="660c052f-1ed5-492d-8f20-01ac3807eefb" containerName="extract-content" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.574502 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="677ad41b-e2cb-4329-a014-3427d8ec936b" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.574519 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="660c052f-1ed5-492d-8f20-01ac3807eefb" containerName="registry-server" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.574546 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="19c6075f-11f7-41f5-a389-9695c2b37e66" containerName="oc" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.575455 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.577495 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-5cckz" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.578129 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.578667 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.578848 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.585621 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.586129 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.600473 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"] Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.621035 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv" Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.621313 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45gjt\" 
(UniqueName: \"kubernetes.io/projected/b52f8064-ed82-432d-9846-87e8a5282382-kube-api-access-45gjt\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.621494 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.621644 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.621825 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.621925 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-ceph\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.724504 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.724621 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45gjt\" (UniqueName: \"kubernetes.io/projected/b52f8064-ed82-432d-9846-87e8a5282382-kube-api-access-45gjt\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.724675 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.724723 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.724781 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.724805 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-ceph\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.729498 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.731133 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-ceph\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.733729 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.734446 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.737025 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.756548 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45gjt\" (UniqueName: \"kubernetes.io/projected/b52f8064-ed82-432d-9846-87e8a5282382-kube-api-access-45gjt\") pod \"logging-edpm-deployment-openstack-edpm-ipam-mpgrv\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:53 crc kubenswrapper[4762]: I0308 01:34:53.910628 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:34:54 crc kubenswrapper[4762]: I0308 01:34:54.530539 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"]
Mar 08 01:34:54 crc kubenswrapper[4762]: W0308 01:34:54.543194 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb52f8064_ed82_432d_9846_87e8a5282382.slice/crio-d17caeac6f8c0d296e633b25bdf9ddc9423bfdadb89c7bf2b78c253750edb702 WatchSource:0}: Error finding container d17caeac6f8c0d296e633b25bdf9ddc9423bfdadb89c7bf2b78c253750edb702: Status 404 returned error can't find the container with id d17caeac6f8c0d296e633b25bdf9ddc9423bfdadb89c7bf2b78c253750edb702
Mar 08 01:34:55 crc kubenswrapper[4762]: I0308 01:34:55.469366 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv" event={"ID":"b52f8064-ed82-432d-9846-87e8a5282382","Type":"ContainerStarted","Data":"ffc9227200a257a771eef645a2be7e66d79e216e37ade0c2a3afa12ca7431e83"}
Mar 08 01:34:55 crc kubenswrapper[4762]: I0308 01:34:55.469661 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv" event={"ID":"b52f8064-ed82-432d-9846-87e8a5282382","Type":"ContainerStarted","Data":"d17caeac6f8c0d296e633b25bdf9ddc9423bfdadb89c7bf2b78c253750edb702"}
Mar 08 01:34:55 crc kubenswrapper[4762]: I0308 01:34:55.510991 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv" podStartSLOduration=2.068402643 podStartE2EDuration="2.510967734s" podCreationTimestamp="2026-03-08 01:34:53 +0000 UTC" firstStartedPulling="2026-03-08 01:34:54.546633444 +0000 UTC m=+4316.020777788" lastFinishedPulling="2026-03-08 01:34:54.989198495 +0000 UTC m=+4316.463342879" observedRunningTime="2026-03-08 01:34:55.491267288 +0000 UTC m=+4316.965411662" watchObservedRunningTime="2026-03-08 01:34:55.510967734 +0000 UTC m=+4316.985112118"
Mar 08 01:35:08 crc kubenswrapper[4762]: I0308 01:35:08.667310 4762 generic.go:334] "Generic (PLEG): container finished" podID="b52f8064-ed82-432d-9846-87e8a5282382" containerID="ffc9227200a257a771eef645a2be7e66d79e216e37ade0c2a3afa12ca7431e83" exitCode=0
Mar 08 01:35:08 crc kubenswrapper[4762]: I0308 01:35:08.667433 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv" event={"ID":"b52f8064-ed82-432d-9846-87e8a5282382","Type":"ContainerDied","Data":"ffc9227200a257a771eef645a2be7e66d79e216e37ade0c2a3afa12ca7431e83"}
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.213089 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.221579 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-inventory\") pod \"b52f8064-ed82-432d-9846-87e8a5282382\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") "
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.221715 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45gjt\" (UniqueName: \"kubernetes.io/projected/b52f8064-ed82-432d-9846-87e8a5282382-kube-api-access-45gjt\") pod \"b52f8064-ed82-432d-9846-87e8a5282382\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") "
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.221774 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-logging-compute-config-data-1\") pod \"b52f8064-ed82-432d-9846-87e8a5282382\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") "
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.221906 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-ssh-key-openstack-edpm-ipam\") pod \"b52f8064-ed82-432d-9846-87e8a5282382\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") "
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.221941 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-ceph\") pod \"b52f8064-ed82-432d-9846-87e8a5282382\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") "
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.221966 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-logging-compute-config-data-0\") pod \"b52f8064-ed82-432d-9846-87e8a5282382\" (UID: \"b52f8064-ed82-432d-9846-87e8a5282382\") "
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.228877 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b52f8064-ed82-432d-9846-87e8a5282382-kube-api-access-45gjt" (OuterVolumeSpecName: "kube-api-access-45gjt") pod "b52f8064-ed82-432d-9846-87e8a5282382" (UID: "b52f8064-ed82-432d-9846-87e8a5282382"). InnerVolumeSpecName "kube-api-access-45gjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.228913 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-ceph" (OuterVolumeSpecName: "ceph") pod "b52f8064-ed82-432d-9846-87e8a5282382" (UID: "b52f8064-ed82-432d-9846-87e8a5282382"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.268448 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b52f8064-ed82-432d-9846-87e8a5282382" (UID: "b52f8064-ed82-432d-9846-87e8a5282382"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.275443 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "b52f8064-ed82-432d-9846-87e8a5282382" (UID: "b52f8064-ed82-432d-9846-87e8a5282382"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.289455 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-inventory" (OuterVolumeSpecName: "inventory") pod "b52f8064-ed82-432d-9846-87e8a5282382" (UID: "b52f8064-ed82-432d-9846-87e8a5282382"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.291613 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "b52f8064-ed82-432d-9846-87e8a5282382" (UID: "b52f8064-ed82-432d-9846-87e8a5282382"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.325028 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.325092 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-ceph\") on node \"crc\" DevicePath \"\""
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.325107 4762 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\""
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.325175 4762 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-inventory\") on node \"crc\" DevicePath \"\""
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.325314 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45gjt\" (UniqueName: \"kubernetes.io/projected/b52f8064-ed82-432d-9846-87e8a5282382-kube-api-access-45gjt\") on node \"crc\" DevicePath \"\""
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.325333 4762 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b52f8064-ed82-432d-9846-87e8a5282382-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\""
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.691918 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv" event={"ID":"b52f8064-ed82-432d-9846-87e8a5282382","Type":"ContainerDied","Data":"d17caeac6f8c0d296e633b25bdf9ddc9423bfdadb89c7bf2b78c253750edb702"}
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.691970 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d17caeac6f8c0d296e633b25bdf9ddc9423bfdadb89c7bf2b78c253750edb702"
Mar 08 01:35:10 crc kubenswrapper[4762]: I0308 01:35:10.691977 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-mpgrv"
Mar 08 01:35:10 crc kubenswrapper[4762]: E0308 01:35:10.768063 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb52f8064_ed82_432d_9846_87e8a5282382.slice\": RecentStats: unable to find data in memory cache]"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.771950 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"]
Mar 08 01:35:26 crc kubenswrapper[4762]: E0308 01:35:26.774237 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b52f8064-ed82-432d-9846-87e8a5282382" containerName="logging-edpm-deployment-openstack-edpm-ipam"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.774272 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="b52f8064-ed82-432d-9846-87e8a5282382" containerName="logging-edpm-deployment-openstack-edpm-ipam"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.774521 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="b52f8064-ed82-432d-9846-87e8a5282382" containerName="logging-edpm-deployment-openstack-edpm-ipam"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.779678 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.784181 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.786149 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.819380 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"]
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.891980 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eea7cf3-6a5e-4661-a544-a48ebc424a89-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.892023 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-sys\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.892053 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.892089 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8eea7cf3-6a5e-4661-a544-a48ebc424a89-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.892151 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-run\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.892184 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-dev\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.892209 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eea7cf3-6a5e-4661-a544-a48ebc424a89-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.892226 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.892260 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.892284 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.892302 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.892330 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fzsl\" (UniqueName: \"kubernetes.io/projected/8eea7cf3-6a5e-4661-a544-a48ebc424a89-kube-api-access-2fzsl\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.892353 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eea7cf3-6a5e-4661-a544-a48ebc424a89-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.892370 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.892391 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.892418 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8eea7cf3-6a5e-4661-a544-a48ebc424a89-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.901714 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"]
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.903513 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.905500 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.911027 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"]
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.994301 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtfn2\" (UniqueName: \"kubernetes.io/projected/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-kube-api-access-jtfn2\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.994350 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.994380 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8eea7cf3-6a5e-4661-a544-a48ebc424a89-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.994397 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.994434 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-scripts\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.994449 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-run\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.994470 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-ceph\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.994485 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-run\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.994514 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-run\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.994524 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.994752 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.994827 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.994866 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-dev\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.994903 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-dev\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.994963 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eea7cf3-6a5e-4661-a544-a48ebc424a89-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.994989 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995012 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995090 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995126 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-config-data\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995165 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995200 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995268 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995319 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995373 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995395 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-lib-modules\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995428 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fzsl\" (UniqueName: \"kubernetes.io/projected/8eea7cf3-6a5e-4661-a544-a48ebc424a89-kube-api-access-2fzsl\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995479 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eea7cf3-6a5e-4661-a544-a48ebc424a89-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995511 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995542 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995597 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8eea7cf3-6a5e-4661-a544-a48ebc424a89-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995638 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-sys\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995683 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995773 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eea7cf3-6a5e-4661-a544-a48ebc424a89-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995794 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-sys\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.995817 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.996008 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-dev\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0"
Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.996040 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName:
\"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-sys\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0" Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.996107 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0" Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.996140 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0" Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.996174 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0" Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.996195 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0" Mar 08 01:35:26 crc kubenswrapper[4762]: I0308 01:35:26.996423 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8eea7cf3-6a5e-4661-a544-a48ebc424a89-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: 
\"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.000517 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8eea7cf3-6a5e-4661-a544-a48ebc424a89-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.001848 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8eea7cf3-6a5e-4661-a544-a48ebc424a89-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.002803 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8eea7cf3-6a5e-4661-a544-a48ebc424a89-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.004456 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8eea7cf3-6a5e-4661-a544-a48ebc424a89-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.008902 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8eea7cf3-6a5e-4661-a544-a48ebc424a89-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.013855 4762 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2fzsl\" (UniqueName: \"kubernetes.io/projected/8eea7cf3-6a5e-4661-a544-a48ebc424a89-kube-api-access-2fzsl\") pod \"cinder-volume-volume1-0\" (UID: \"8eea7cf3-6a5e-4661-a544-a48ebc424a89\") " pod="openstack/cinder-volume-volume1-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.097718 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.097799 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.097833 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-dev\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.097876 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.097919 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-etc-machine-id\") pod \"cinder-backup-0\" 
(UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.097943 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-config-data\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.097943 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.097967 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.097941 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-dev\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.098015 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.098193 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.098219 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-lib-modules\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.098295 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-sys\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.098329 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.098367 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-lib-modules\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.098382 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: 
I0308 01:35:27.098403 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-sys\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.098412 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtfn2\" (UniqueName: \"kubernetes.io/projected/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-kube-api-access-jtfn2\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.098452 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.098453 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.098490 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-scripts\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.098498 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-var-locks-brick\") pod 
\"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.098520 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-ceph\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.098534 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-run\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.098723 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-run\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.098402 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-etc-nvme\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.101356 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.102627 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.103085 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-scripts\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.108397 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-ceph\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.108475 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-config-data-custom\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.109220 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-config-data\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.120627 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtfn2\" (UniqueName: 
\"kubernetes.io/projected/3fa8be70-ca35-4c49-867c-43a10b8f6f8e-kube-api-access-jtfn2\") pod \"cinder-backup-0\" (UID: \"3fa8be70-ca35-4c49-867c-43a10b8f6f8e\") " pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.233011 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.658566 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.660604 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.665139 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.665348 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wrkbl" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.665632 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.673549 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.675454 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-ldh47"] Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.679957 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-ldh47" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.701600 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.717155 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-ldh47"] Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.748474 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-config-data\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.748544 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.749005 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9275bc7c-03c2-4ef8-9112-29b4e80555e3-ceph\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.749068 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc 
kubenswrapper[4762]: I0308 01:35:27.749103 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-scripts\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.749156 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btstq\" (UniqueName: \"kubernetes.io/projected/f2bd8503-1bcc-4cb0-9928-19f698eca2fd-kube-api-access-btstq\") pod \"manila-db-create-ldh47\" (UID: \"f2bd8503-1bcc-4cb0-9928-19f698eca2fd\") " pod="openstack/manila-db-create-ldh47" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.749269 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9275bc7c-03c2-4ef8-9112-29b4e80555e3-logs\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.749315 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9275bc7c-03c2-4ef8-9112-29b4e80555e3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.749413 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnt8r\" (UniqueName: \"kubernetes.io/projected/9275bc7c-03c2-4ef8-9112-29b4e80555e3-kube-api-access-mnt8r\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" 
Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.749442 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2bd8503-1bcc-4cb0-9928-19f698eca2fd-operator-scripts\") pod \"manila-db-create-ldh47\" (UID: \"f2bd8503-1bcc-4cb0-9928-19f698eca2fd\") " pod="openstack/manila-db-create-ldh47" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.749468 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.754828 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.758433 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.777045 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.777371 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.777469 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.843821 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-31d0-account-create-update-pldhf"] Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.845301 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-31d0-account-create-update-pldhf" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.850054 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852113 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852232 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9275bc7c-03c2-4ef8-9112-29b4e80555e3-ceph\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852274 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852297 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-scripts\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852327 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852356 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btstq\" (UniqueName: \"kubernetes.io/projected/f2bd8503-1bcc-4cb0-9928-19f698eca2fd-kube-api-access-btstq\") pod \"manila-db-create-ldh47\" (UID: \"f2bd8503-1bcc-4cb0-9928-19f698eca2fd\") " pod="openstack/manila-db-create-ldh47" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852408 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-logs\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852448 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852472 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852495 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9275bc7c-03c2-4ef8-9112-29b4e80555e3-logs\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852525 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9275bc7c-03c2-4ef8-9112-29b4e80555e3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852547 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkmtq\" (UniqueName: \"kubernetes.io/projected/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-kube-api-access-nkmtq\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852581 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852624 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnt8r\" (UniqueName: \"kubernetes.io/projected/9275bc7c-03c2-4ef8-9112-29b4e80555e3-kube-api-access-mnt8r\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852648 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f2bd8503-1bcc-4cb0-9928-19f698eca2fd-operator-scripts\") pod \"manila-db-create-ldh47\" (UID: \"f2bd8503-1bcc-4cb0-9928-19f698eca2fd\") " pod="openstack/manila-db-create-ldh47" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852673 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.852699 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-config-data\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.854705 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9275bc7c-03c2-4ef8-9112-29b4e80555e3-logs\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.854873 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.854916 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.854997 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.855040 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9275bc7c-03c2-4ef8-9112-29b4e80555e3-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.856080 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.857296 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2bd8503-1bcc-4cb0-9928-19f698eca2fd-operator-scripts\") pod \"manila-db-create-ldh47\" (UID: \"f2bd8503-1bcc-4cb0-9928-19f698eca2fd\") " pod="openstack/manila-db-create-ldh47" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.865700 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.866518 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-scripts\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.866967 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9275bc7c-03c2-4ef8-9112-29b4e80555e3-ceph\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.867424 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.868394 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-config-data\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.872056 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-31d0-account-create-update-pldhf"] Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.885588 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btstq\" (UniqueName: 
\"kubernetes.io/projected/f2bd8503-1bcc-4cb0-9928-19f698eca2fd-kube-api-access-btstq\") pod \"manila-db-create-ldh47\" (UID: \"f2bd8503-1bcc-4cb0-9928-19f698eca2fd\") " pod="openstack/manila-db-create-ldh47" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.885706 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnt8r\" (UniqueName: \"kubernetes.io/projected/9275bc7c-03c2-4ef8-9112-29b4e80555e3-kube-api-access-mnt8r\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.926061 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.955094 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.956970 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.957083 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.957139 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.957162 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42w64\" (UniqueName: \"kubernetes.io/projected/2ab27811-9a59-4997-8546-0b1bf6668150-kube-api-access-42w64\") pod \"manila-31d0-account-create-update-pldhf\" (UID: \"2ab27811-9a59-4997-8546-0b1bf6668150\") " pod="openstack/manila-31d0-account-create-update-pldhf" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.957199 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.957257 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-logs\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.957284 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.957318 4762 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.957345 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkmtq\" (UniqueName: \"kubernetes.io/projected/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-kube-api-access-nkmtq\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.957396 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.957440 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ab27811-9a59-4997-8546-0b1bf6668150-operator-scripts\") pod \"manila-31d0-account-create-update-pldhf\" (UID: \"2ab27811-9a59-4997-8546-0b1bf6668150\") " pod="openstack/manila-31d0-account-create-update-pldhf" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.958872 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-logs\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.959430 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.960308 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.978544 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7c87b746d7-v5s72"] Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.979880 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkmtq\" (UniqueName: \"kubernetes.io/projected/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-kube-api-access-nkmtq\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.980170 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.981481 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.981997 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.985243 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.985322 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.987082 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-ts7j2" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.988720 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"8eea7cf3-6a5e-4661-a544-a48ebc424a89","Type":"ContainerStarted","Data":"f0f1e6f829f7f252ba506655d05c5719c1ebd00531be7ac5dbb578af28c6cd32"} Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.988793 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Mar 08 01:35:27 crc kubenswrapper[4762]: I0308 01:35:27.994153 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c87b746d7-v5s72"] Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.013777 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-ldh47" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.034466 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.057896 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 08 01:35:28 crc kubenswrapper[4762]: E0308 01:35:28.058970 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceph combined-ca-bundle config-data glance scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-internal-api-0" podUID="1464b0ac-97e6-4e2f-b323-bb9d5aae8072" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.059691 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1675bf2f-acd6-4383-98d7-7c5fd92d9095-config-data\") pod \"horizon-7c87b746d7-v5s72\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") " pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.059816 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ab27811-9a59-4997-8546-0b1bf6668150-operator-scripts\") pod \"manila-31d0-account-create-update-pldhf\" (UID: \"2ab27811-9a59-4997-8546-0b1bf6668150\") " pod="openstack/manila-31d0-account-create-update-pldhf" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.059899 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1675bf2f-acd6-4383-98d7-7c5fd92d9095-horizon-secret-key\") pod \"horizon-7c87b746d7-v5s72\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") " pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 
01:35:28.060022 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xfcl\" (UniqueName: \"kubernetes.io/projected/1675bf2f-acd6-4383-98d7-7c5fd92d9095-kube-api-access-8xfcl\") pod \"horizon-7c87b746d7-v5s72\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") " pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.060139 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42w64\" (UniqueName: \"kubernetes.io/projected/2ab27811-9a59-4997-8546-0b1bf6668150-kube-api-access-42w64\") pod \"manila-31d0-account-create-update-pldhf\" (UID: \"2ab27811-9a59-4997-8546-0b1bf6668150\") " pod="openstack/manila-31d0-account-create-update-pldhf" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.060256 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1675bf2f-acd6-4383-98d7-7c5fd92d9095-logs\") pod \"horizon-7c87b746d7-v5s72\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") " pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.060341 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1675bf2f-acd6-4383-98d7-7c5fd92d9095-scripts\") pod \"horizon-7c87b746d7-v5s72\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") " pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.060509 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ab27811-9a59-4997-8546-0b1bf6668150-operator-scripts\") pod \"manila-31d0-account-create-update-pldhf\" (UID: \"2ab27811-9a59-4997-8546-0b1bf6668150\") " pod="openstack/manila-31d0-account-create-update-pldhf" Mar 08 01:35:28 crc 
kubenswrapper[4762]: I0308 01:35:28.085915 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.095489 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d7d9f8bb9-rwxd8"] Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.097357 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.106153 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d7d9f8bb9-rwxd8"] Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.162593 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1675bf2f-acd6-4383-98d7-7c5fd92d9095-config-data\") pod \"horizon-7c87b746d7-v5s72\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") " pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.162676 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1675bf2f-acd6-4383-98d7-7c5fd92d9095-horizon-secret-key\") pod \"horizon-7c87b746d7-v5s72\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") " pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.162723 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2q2r\" (UniqueName: \"kubernetes.io/projected/01c66c2c-537a-469a-958a-5edb6e67a8ab-kube-api-access-d2q2r\") pod \"horizon-6d7d9f8bb9-rwxd8\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") " pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.162749 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/01c66c2c-537a-469a-958a-5edb6e67a8ab-config-data\") pod \"horizon-6d7d9f8bb9-rwxd8\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") " pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.162785 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/01c66c2c-537a-469a-958a-5edb6e67a8ab-horizon-secret-key\") pod \"horizon-6d7d9f8bb9-rwxd8\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") " pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.163012 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xfcl\" (UniqueName: \"kubernetes.io/projected/1675bf2f-acd6-4383-98d7-7c5fd92d9095-kube-api-access-8xfcl\") pod \"horizon-7c87b746d7-v5s72\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") " pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.163132 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01c66c2c-537a-469a-958a-5edb6e67a8ab-logs\") pod \"horizon-6d7d9f8bb9-rwxd8\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") " pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.163432 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1675bf2f-acd6-4383-98d7-7c5fd92d9095-logs\") pod \"horizon-7c87b746d7-v5s72\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") " pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.163543 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/01c66c2c-537a-469a-958a-5edb6e67a8ab-scripts\") pod \"horizon-6d7d9f8bb9-rwxd8\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") " pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.163575 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1675bf2f-acd6-4383-98d7-7c5fd92d9095-scripts\") pod \"horizon-7c87b746d7-v5s72\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") " pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.163781 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1675bf2f-acd6-4383-98d7-7c5fd92d9095-logs\") pod \"horizon-7c87b746d7-v5s72\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") " pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.164426 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1675bf2f-acd6-4383-98d7-7c5fd92d9095-scripts\") pod \"horizon-7c87b746d7-v5s72\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") " pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.165663 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1675bf2f-acd6-4383-98d7-7c5fd92d9095-config-data\") pod \"horizon-7c87b746d7-v5s72\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") " pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.265847 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01c66c2c-537a-469a-958a-5edb6e67a8ab-logs\") pod \"horizon-6d7d9f8bb9-rwxd8\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") " 
pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.265967 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01c66c2c-537a-469a-958a-5edb6e67a8ab-scripts\") pod \"horizon-6d7d9f8bb9-rwxd8\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") " pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.266105 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2q2r\" (UniqueName: \"kubernetes.io/projected/01c66c2c-537a-469a-958a-5edb6e67a8ab-kube-api-access-d2q2r\") pod \"horizon-6d7d9f8bb9-rwxd8\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") " pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.266143 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01c66c2c-537a-469a-958a-5edb6e67a8ab-config-data\") pod \"horizon-6d7d9f8bb9-rwxd8\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") " pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.266166 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/01c66c2c-537a-469a-958a-5edb6e67a8ab-horizon-secret-key\") pod \"horizon-6d7d9f8bb9-rwxd8\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") " pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.267000 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01c66c2c-537a-469a-958a-5edb6e67a8ab-logs\") pod \"horizon-6d7d9f8bb9-rwxd8\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") " pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.267042 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01c66c2c-537a-469a-958a-5edb6e67a8ab-scripts\") pod \"horizon-6d7d9f8bb9-rwxd8\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") " pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.267661 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01c66c2c-537a-469a-958a-5edb6e67a8ab-config-data\") pod \"horizon-6d7d9f8bb9-rwxd8\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") " pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.498553 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.498699 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.500165 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-ceph\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.500825 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xfcl\" (UniqueName: 
\"kubernetes.io/projected/1675bf2f-acd6-4383-98d7-7c5fd92d9095-kube-api-access-8xfcl\") pod \"horizon-7c87b746d7-v5s72\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") " pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.500884 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1675bf2f-acd6-4383-98d7-7c5fd92d9095-horizon-secret-key\") pod \"horizon-7c87b746d7-v5s72\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") " pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.501250 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42w64\" (UniqueName: \"kubernetes.io/projected/2ab27811-9a59-4997-8546-0b1bf6668150-kube-api-access-42w64\") pod \"manila-31d0-account-create-update-pldhf\" (UID: \"2ab27811-9a59-4997-8546-0b1bf6668150\") " pod="openstack/manila-31d0-account-create-update-pldhf" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.501577 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.513100 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2q2r\" (UniqueName: \"kubernetes.io/projected/01c66c2c-537a-469a-958a-5edb6e67a8ab-kube-api-access-d2q2r\") pod \"horizon-6d7d9f8bb9-rwxd8\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") " pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.513180 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/01c66c2c-537a-469a-958a-5edb6e67a8ab-horizon-secret-key\") pod \"horizon-6d7d9f8bb9-rwxd8\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") " pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.526641 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.660543 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.672126 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:28 crc kubenswrapper[4762]: I0308 01:35:28.779099 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-31d0-account-create-update-pldhf" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.029728 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.030076 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3fa8be70-ca35-4c49-867c-43a10b8f6f8e","Type":"ContainerStarted","Data":"e1a4099e8bc51e2a3b2ae24ab92227e23d0e0e87acacd911c5c942be44cd0b70"} Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.077611 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-ldh47"] Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.178155 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.294511 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-ceph\") pod \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.294575 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkmtq\" (UniqueName: \"kubernetes.io/projected/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-kube-api-access-nkmtq\") pod \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.294608 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-scripts\") pod \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.294632 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-logs\") pod \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.294651 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-config-data\") pod \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.294700 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-combined-ca-bundle\") pod \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.294721 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.294782 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-httpd-run\") pod \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.294862 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-internal-tls-certs\") pod \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\" (UID: \"1464b0ac-97e6-4e2f-b323-bb9d5aae8072\") " Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.295194 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-logs" (OuterVolumeSpecName: "logs") pod "1464b0ac-97e6-4e2f-b323-bb9d5aae8072" (UID: "1464b0ac-97e6-4e2f-b323-bb9d5aae8072"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.295490 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-logs\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.298613 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1464b0ac-97e6-4e2f-b323-bb9d5aae8072" (UID: "1464b0ac-97e6-4e2f-b323-bb9d5aae8072"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.312434 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-config-data" (OuterVolumeSpecName: "config-data") pod "1464b0ac-97e6-4e2f-b323-bb9d5aae8072" (UID: "1464b0ac-97e6-4e2f-b323-bb9d5aae8072"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.322390 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1464b0ac-97e6-4e2f-b323-bb9d5aae8072" (UID: "1464b0ac-97e6-4e2f-b323-bb9d5aae8072"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.323314 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-scripts" (OuterVolumeSpecName: "scripts") pod "1464b0ac-97e6-4e2f-b323-bb9d5aae8072" (UID: "1464b0ac-97e6-4e2f-b323-bb9d5aae8072"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.324776 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-kube-api-access-nkmtq" (OuterVolumeSpecName: "kube-api-access-nkmtq") pod "1464b0ac-97e6-4e2f-b323-bb9d5aae8072" (UID: "1464b0ac-97e6-4e2f-b323-bb9d5aae8072"). InnerVolumeSpecName "kube-api-access-nkmtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.327039 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.330470 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-ceph" (OuterVolumeSpecName: "ceph") pod "1464b0ac-97e6-4e2f-b323-bb9d5aae8072" (UID: "1464b0ac-97e6-4e2f-b323-bb9d5aae8072"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.330513 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "1464b0ac-97e6-4e2f-b323-bb9d5aae8072" (UID: "1464b0ac-97e6-4e2f-b323-bb9d5aae8072"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.339303 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1464b0ac-97e6-4e2f-b323-bb9d5aae8072" (UID: "1464b0ac-97e6-4e2f-b323-bb9d5aae8072"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.400513 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.400545 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkmtq\" (UniqueName: \"kubernetes.io/projected/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-kube-api-access-nkmtq\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.400555 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.400563 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.400588 4762 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.400598 4762 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.400607 4762 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.400615 4762 reconciler_common.go:293] 
"Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/1464b0ac-97e6-4e2f-b323-bb9d5aae8072-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.430515 4762 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.504347 4762 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.535804 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7c87b746d7-v5s72"] Mar 08 01:35:29 crc kubenswrapper[4762]: W0308 01:35:29.541833 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1675bf2f_acd6_4383_98d7_7c5fd92d9095.slice/crio-853e0f68bcdd6724db6277d42e7be5ae7e674221ef10b4ed1ed2eef30533a522 WatchSource:0}: Error finding container 853e0f68bcdd6724db6277d42e7be5ae7e674221ef10b4ed1ed2eef30533a522: Status 404 returned error can't find the container with id 853e0f68bcdd6724db6277d42e7be5ae7e674221ef10b4ed1ed2eef30533a522 Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.545358 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d7d9f8bb9-rwxd8"] Mar 08 01:35:29 crc kubenswrapper[4762]: W0308 01:35:29.552252 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01c66c2c_537a_469a_958a_5edb6e67a8ab.slice/crio-b80327951cb0bfc9576a6687aeabd9332b05b40dcd42781dc80a8ac6f5df0c32 WatchSource:0}: Error finding container b80327951cb0bfc9576a6687aeabd9332b05b40dcd42781dc80a8ac6f5df0c32: Status 404 returned error can't find the container with id 
b80327951cb0bfc9576a6687aeabd9332b05b40dcd42781dc80a8ac6f5df0c32 Mar 08 01:35:29 crc kubenswrapper[4762]: I0308 01:35:29.796681 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-31d0-account-create-update-pldhf"] Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.040977 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-ldh47" event={"ID":"f2bd8503-1bcc-4cb0-9928-19f698eca2fd","Type":"ContainerStarted","Data":"651e64ffafe4b57fa615d3e794c411da3f0d9fd72c7fadc6f5f2d6e6c293950d"} Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.041017 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-ldh47" event={"ID":"f2bd8503-1bcc-4cb0-9928-19f698eca2fd","Type":"ContainerStarted","Data":"9fc13bf9e6d274e6b1c5516f1c0ac7eac92aa5d6e437972c77762634af8e9b90"} Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.044014 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9275bc7c-03c2-4ef8-9112-29b4e80555e3","Type":"ContainerStarted","Data":"0db78d1c8e8655feff4b04ec605523de2a40acac0f681ff873b4ba4a831b55aa"} Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.045812 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d7d9f8bb9-rwxd8" event={"ID":"01c66c2c-537a-469a-958a-5edb6e67a8ab","Type":"ContainerStarted","Data":"b80327951cb0bfc9576a6687aeabd9332b05b40dcd42781dc80a8ac6f5df0c32"} Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.046618 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c87b746d7-v5s72" event={"ID":"1675bf2f-acd6-4383-98d7-7c5fd92d9095","Type":"ContainerStarted","Data":"853e0f68bcdd6724db6277d42e7be5ae7e674221ef10b4ed1ed2eef30533a522"} Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.048141 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.049680 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"8eea7cf3-6a5e-4661-a544-a48ebc424a89","Type":"ContainerStarted","Data":"7312bba8475d2ba2ab481435d20d32e50330bf62a9f67ceee691272d2f4272b6"} Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.062525 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-create-ldh47" podStartSLOduration=3.062510385 podStartE2EDuration="3.062510385s" podCreationTimestamp="2026-03-08 01:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 01:35:30.059195876 +0000 UTC m=+4351.533340220" watchObservedRunningTime="2026-03-08 01:35:30.062510385 +0000 UTC m=+4351.536654729" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.611407 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c87b746d7-v5s72"] Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.685586 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-69fb85975b-kwm2b"] Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.687940 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.694199 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.759461 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-config-data\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.759630 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-horizon-secret-key\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.759827 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-scripts\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.759950 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-horizon-tls-certs\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.760019 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-combined-ca-bundle\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.760053 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-logs\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.760128 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7gft\" (UniqueName: \"kubernetes.io/projected/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-kube-api-access-h7gft\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.760395 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69fb85975b-kwm2b"] Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.783141 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d7d9f8bb9-rwxd8"] Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.858354 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c45886cfb-v4xv2"] Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.861977 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.863437 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-config-data\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.863510 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-horizon-secret-key\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.863575 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-scripts\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.863623 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-horizon-tls-certs\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.863650 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-combined-ca-bundle\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc 
kubenswrapper[4762]: I0308 01:35:30.863673 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-logs\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.863705 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7gft\" (UniqueName: \"kubernetes.io/projected/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-kube-api-access-h7gft\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.868554 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-horizon-tls-certs\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.869526 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-logs\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.870312 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-scripts\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.878529 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-config-data\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.883890 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-horizon-secret-key\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.889978 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-combined-ca-bundle\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.893196 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7gft\" (UniqueName: \"kubernetes.io/projected/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-kube-api-access-h7gft\") pod \"horizon-69fb85975b-kwm2b\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.953681 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c45886cfb-v4xv2"] Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.965713 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f395fca0-1bc0-43fe-aca6-4910f6ca3347-logs\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.965950 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f395fca0-1bc0-43fe-aca6-4910f6ca3347-scripts\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.966059 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f395fca0-1bc0-43fe-aca6-4910f6ca3347-horizon-secret-key\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.966192 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f395fca0-1bc0-43fe-aca6-4910f6ca3347-config-data\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.966306 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f395fca0-1bc0-43fe-aca6-4910f6ca3347-combined-ca-bundle\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.966503 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f395fca0-1bc0-43fe-aca6-4910f6ca3347-horizon-tls-certs\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:30 crc kubenswrapper[4762]: I0308 01:35:30.966534 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tcfb\" (UniqueName: \"kubernetes.io/projected/f395fca0-1bc0-43fe-aca6-4910f6ca3347-kube-api-access-4tcfb\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.069103 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f395fca0-1bc0-43fe-aca6-4910f6ca3347-logs\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.069676 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f395fca0-1bc0-43fe-aca6-4910f6ca3347-scripts\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.069718 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f395fca0-1bc0-43fe-aca6-4910f6ca3347-horizon-secret-key\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.069781 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f395fca0-1bc0-43fe-aca6-4910f6ca3347-config-data\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.069823 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f395fca0-1bc0-43fe-aca6-4910f6ca3347-combined-ca-bundle\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.069892 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f395fca0-1bc0-43fe-aca6-4910f6ca3347-horizon-tls-certs\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.069913 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tcfb\" (UniqueName: \"kubernetes.io/projected/f395fca0-1bc0-43fe-aca6-4910f6ca3347-kube-api-access-4tcfb\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.072634 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f395fca0-1bc0-43fe-aca6-4910f6ca3347-logs\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.072708 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.073174 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f395fca0-1bc0-43fe-aca6-4910f6ca3347-scripts\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.073984 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/f395fca0-1bc0-43fe-aca6-4910f6ca3347-horizon-secret-key\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.075934 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f395fca0-1bc0-43fe-aca6-4910f6ca3347-config-data\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.078152 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/f395fca0-1bc0-43fe-aca6-4910f6ca3347-horizon-tls-certs\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.079711 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-31d0-account-create-update-pldhf" event={"ID":"2ab27811-9a59-4997-8546-0b1bf6668150","Type":"ContainerStarted","Data":"38ac1faf9d9e3ae9945e316f61df5dfb96b9ae9b27e8bb81354577f7748b4de1"} Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.082085 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f395fca0-1bc0-43fe-aca6-4910f6ca3347-combined-ca-bundle\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.082961 4762 generic.go:334] "Generic (PLEG): container finished" podID="f2bd8503-1bcc-4cb0-9928-19f698eca2fd" containerID="651e64ffafe4b57fa615d3e794c411da3f0d9fd72c7fadc6f5f2d6e6c293950d" exitCode=0 Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 
01:35:31.083017 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-ldh47" event={"ID":"f2bd8503-1bcc-4cb0-9928-19f698eca2fd","Type":"ContainerDied","Data":"651e64ffafe4b57fa615d3e794c411da3f0d9fd72c7fadc6f5f2d6e6c293950d"} Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.085853 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3fa8be70-ca35-4c49-867c-43a10b8f6f8e","Type":"ContainerStarted","Data":"5561f321e82514a4c6213e172f71c7c0e12d4481b063a5e8c26c2db81ddee599"} Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.090508 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tcfb\" (UniqueName: \"kubernetes.io/projected/f395fca0-1bc0-43fe-aca6-4910f6ca3347-kube-api-access-4tcfb\") pod \"horizon-6c45886cfb-v4xv2\" (UID: \"f395fca0-1bc0-43fe-aca6-4910f6ca3347\") " pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.090800 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.103861 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.105716 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.108522 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.109589 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.120706 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.193357 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56528a09-adcd-4337-80e8-3848a7cfa652-logs\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.193461 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56528a09-adcd-4337-80e8-3848a7cfa652-config-data\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.193631 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56528a09-adcd-4337-80e8-3848a7cfa652-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.194328 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.194360 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56528a09-adcd-4337-80e8-3848a7cfa652-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.194391 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/56528a09-adcd-4337-80e8-3848a7cfa652-ceph\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.194559 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56528a09-adcd-4337-80e8-3848a7cfa652-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.194653 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhtv4\" (UniqueName: \"kubernetes.io/projected/56528a09-adcd-4337-80e8-3848a7cfa652-kube-api-access-hhtv4\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.194698 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/56528a09-adcd-4337-80e8-3848a7cfa652-scripts\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.267279 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.268029 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.291600 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1464b0ac-97e6-4e2f-b323-bb9d5aae8072" path="/var/lib/kubelet/pods/1464b0ac-97e6-4e2f-b323-bb9d5aae8072/volumes" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.298112 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.298177 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56528a09-adcd-4337-80e8-3848a7cfa652-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.298209 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/56528a09-adcd-4337-80e8-3848a7cfa652-ceph\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 
01:35:31.298323 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56528a09-adcd-4337-80e8-3848a7cfa652-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.298368 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhtv4\" (UniqueName: \"kubernetes.io/projected/56528a09-adcd-4337-80e8-3848a7cfa652-kube-api-access-hhtv4\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.298396 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56528a09-adcd-4337-80e8-3848a7cfa652-scripts\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.298435 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56528a09-adcd-4337-80e8-3848a7cfa652-logs\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.298488 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56528a09-adcd-4337-80e8-3848a7cfa652-config-data\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.298605 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/56528a09-adcd-4337-80e8-3848a7cfa652-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.298843 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.301275 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56528a09-adcd-4337-80e8-3848a7cfa652-logs\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.303801 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56528a09-adcd-4337-80e8-3848a7cfa652-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.304580 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/56528a09-adcd-4337-80e8-3848a7cfa652-ceph\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.306780 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/56528a09-adcd-4337-80e8-3848a7cfa652-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.308583 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56528a09-adcd-4337-80e8-3848a7cfa652-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.320340 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56528a09-adcd-4337-80e8-3848a7cfa652-config-data\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.320614 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56528a09-adcd-4337-80e8-3848a7cfa652-scripts\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.322389 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhtv4\" (UniqueName: \"kubernetes.io/projected/56528a09-adcd-4337-80e8-3848a7cfa652-kube-api-access-hhtv4\") pod \"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.549753 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"56528a09-adcd-4337-80e8-3848a7cfa652\") " pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.578143 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 08 01:35:31 crc kubenswrapper[4762]: I0308 01:35:31.901128 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c45886cfb-v4xv2"] Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.090171 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-69fb85975b-kwm2b"] Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.104479 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3fa8be70-ca35-4c49-867c-43a10b8f6f8e","Type":"ContainerStarted","Data":"acfa7797d84e8b5d473460d12f911cf0d5801c5e292a2ff6bec3f9923875443c"} Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.113587 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9275bc7c-03c2-4ef8-9112-29b4e80555e3","Type":"ContainerStarted","Data":"e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7"} Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.134444 4762 generic.go:334] "Generic (PLEG): container finished" podID="2ab27811-9a59-4997-8546-0b1bf6668150" containerID="e69a94a3c8b780957ad9db67f7283275b306dc6ff6c897f36f1ac62d1417b4e7" exitCode=0 Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.134545 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-31d0-account-create-update-pldhf" event={"ID":"2ab27811-9a59-4997-8546-0b1bf6668150","Type":"ContainerDied","Data":"e69a94a3c8b780957ad9db67f7283275b306dc6ff6c897f36f1ac62d1417b4e7"} Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.141152 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" 
podStartSLOduration=5.336059155 podStartE2EDuration="6.141137158s" podCreationTimestamp="2026-03-08 01:35:26 +0000 UTC" firstStartedPulling="2026-03-08 01:35:28.512694182 +0000 UTC m=+4349.986838526" lastFinishedPulling="2026-03-08 01:35:29.317772185 +0000 UTC m=+4350.791916529" observedRunningTime="2026-03-08 01:35:32.125677036 +0000 UTC m=+4353.599821400" watchObservedRunningTime="2026-03-08 01:35:32.141137158 +0000 UTC m=+4353.615281502" Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.164158 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"8eea7cf3-6a5e-4661-a544-a48ebc424a89","Type":"ContainerStarted","Data":"20206e4f1e7a3c120bd1774fc4943ebedb633ea6cffc9f5832db569a1011bde1"} Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.166926 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c45886cfb-v4xv2" event={"ID":"f395fca0-1bc0-43fe-aca6-4910f6ca3347","Type":"ContainerStarted","Data":"9a09e9659e57b3d7c86520ed162463d614c5fab3dc68c0e21a06bcdd0c13350f"} Mar 08 01:35:32 crc kubenswrapper[4762]: W0308 01:35:32.189035 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a158b3f_a655_4ba9_87a9_b74c44bbc54a.slice/crio-cdf57eda112d8c4243b32e85df38b1a188c0de6f78e8586d5a221a02d0119f33 WatchSource:0}: Error finding container cdf57eda112d8c4243b32e85df38b1a188c0de6f78e8586d5a221a02d0119f33: Status 404 returned error can't find the container with id cdf57eda112d8c4243b32e85df38b1a188c0de6f78e8586d5a221a02d0119f33 Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.201839 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=5.078015854 podStartE2EDuration="6.201818771s" podCreationTimestamp="2026-03-08 01:35:26 +0000 UTC" firstStartedPulling="2026-03-08 01:35:27.853918449 +0000 UTC m=+4349.328062793" 
lastFinishedPulling="2026-03-08 01:35:28.977721366 +0000 UTC m=+4350.451865710" observedRunningTime="2026-03-08 01:35:32.190154522 +0000 UTC m=+4353.664298876" watchObservedRunningTime="2026-03-08 01:35:32.201818771 +0000 UTC m=+4353.675963105" Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.234486 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.456878 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.710290 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-ldh47" Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.794083 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btstq\" (UniqueName: \"kubernetes.io/projected/f2bd8503-1bcc-4cb0-9928-19f698eca2fd-kube-api-access-btstq\") pod \"f2bd8503-1bcc-4cb0-9928-19f698eca2fd\" (UID: \"f2bd8503-1bcc-4cb0-9928-19f698eca2fd\") " Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.794386 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2bd8503-1bcc-4cb0-9928-19f698eca2fd-operator-scripts\") pod \"f2bd8503-1bcc-4cb0-9928-19f698eca2fd\" (UID: \"f2bd8503-1bcc-4cb0-9928-19f698eca2fd\") " Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.795785 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2bd8503-1bcc-4cb0-9928-19f698eca2fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f2bd8503-1bcc-4cb0-9928-19f698eca2fd" (UID: "f2bd8503-1bcc-4cb0-9928-19f698eca2fd"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.801013 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2bd8503-1bcc-4cb0-9928-19f698eca2fd-kube-api-access-btstq" (OuterVolumeSpecName: "kube-api-access-btstq") pod "f2bd8503-1bcc-4cb0-9928-19f698eca2fd" (UID: "f2bd8503-1bcc-4cb0-9928-19f698eca2fd"). InnerVolumeSpecName "kube-api-access-btstq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.898236 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btstq\" (UniqueName: \"kubernetes.io/projected/f2bd8503-1bcc-4cb0-9928-19f698eca2fd-kube-api-access-btstq\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:32 crc kubenswrapper[4762]: I0308 01:35:32.898272 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f2bd8503-1bcc-4cb0-9928-19f698eca2fd-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:33 crc kubenswrapper[4762]: I0308 01:35:33.183947 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56528a09-adcd-4337-80e8-3848a7cfa652","Type":"ContainerStarted","Data":"2256a5ea5a2c0ef95ce5f4d3deecf935fb4b15fdf24ae52cf9db124566d9eac5"} Mar 08 01:35:33 crc kubenswrapper[4762]: I0308 01:35:33.186396 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-ldh47" event={"ID":"f2bd8503-1bcc-4cb0-9928-19f698eca2fd","Type":"ContainerDied","Data":"9fc13bf9e6d274e6b1c5516f1c0ac7eac92aa5d6e437972c77762634af8e9b90"} Mar 08 01:35:33 crc kubenswrapper[4762]: I0308 01:35:33.186417 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fc13bf9e6d274e6b1c5516f1c0ac7eac92aa5d6e437972c77762634af8e9b90" Mar 08 01:35:33 crc kubenswrapper[4762]: I0308 01:35:33.186442 4762 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-ldh47" Mar 08 01:35:33 crc kubenswrapper[4762]: I0308 01:35:33.192171 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69fb85975b-kwm2b" event={"ID":"9a158b3f-a655-4ba9-87a9-b74c44bbc54a","Type":"ContainerStarted","Data":"cdf57eda112d8c4243b32e85df38b1a188c0de6f78e8586d5a221a02d0119f33"} Mar 08 01:35:33 crc kubenswrapper[4762]: I0308 01:35:33.197565 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9275bc7c-03c2-4ef8-9112-29b4e80555e3","Type":"ContainerStarted","Data":"23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e"} Mar 08 01:35:33 crc kubenswrapper[4762]: I0308 01:35:33.198468 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9275bc7c-03c2-4ef8-9112-29b4e80555e3" containerName="glance-log" containerID="cri-o://e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7" gracePeriod=30 Mar 08 01:35:33 crc kubenswrapper[4762]: I0308 01:35:33.198580 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="9275bc7c-03c2-4ef8-9112-29b4e80555e3" containerName="glance-httpd" containerID="cri-o://23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e" gracePeriod=30 Mar 08 01:35:33 crc kubenswrapper[4762]: I0308 01:35:33.230224 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.230202906 podStartE2EDuration="7.230202906s" podCreationTimestamp="2026-03-08 01:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 01:35:33.21961087 +0000 UTC m=+4354.693755214" watchObservedRunningTime="2026-03-08 01:35:33.230202906 +0000 UTC 
m=+4354.704347250" Mar 08 01:35:33 crc kubenswrapper[4762]: I0308 01:35:33.653694 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-31d0-account-create-update-pldhf" Mar 08 01:35:33 crc kubenswrapper[4762]: I0308 01:35:33.720267 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ab27811-9a59-4997-8546-0b1bf6668150-operator-scripts\") pod \"2ab27811-9a59-4997-8546-0b1bf6668150\" (UID: \"2ab27811-9a59-4997-8546-0b1bf6668150\") " Mar 08 01:35:33 crc kubenswrapper[4762]: I0308 01:35:33.720446 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42w64\" (UniqueName: \"kubernetes.io/projected/2ab27811-9a59-4997-8546-0b1bf6668150-kube-api-access-42w64\") pod \"2ab27811-9a59-4997-8546-0b1bf6668150\" (UID: \"2ab27811-9a59-4997-8546-0b1bf6668150\") " Mar 08 01:35:33 crc kubenswrapper[4762]: I0308 01:35:33.721721 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ab27811-9a59-4997-8546-0b1bf6668150-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2ab27811-9a59-4997-8546-0b1bf6668150" (UID: "2ab27811-9a59-4997-8546-0b1bf6668150"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 01:35:33 crc kubenswrapper[4762]: I0308 01:35:33.735877 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ab27811-9a59-4997-8546-0b1bf6668150-kube-api-access-42w64" (OuterVolumeSpecName: "kube-api-access-42w64") pod "2ab27811-9a59-4997-8546-0b1bf6668150" (UID: "2ab27811-9a59-4997-8546-0b1bf6668150"). InnerVolumeSpecName "kube-api-access-42w64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:35:33 crc kubenswrapper[4762]: I0308 01:35:33.825720 4762 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2ab27811-9a59-4997-8546-0b1bf6668150-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:33 crc kubenswrapper[4762]: I0308 01:35:33.825747 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42w64\" (UniqueName: \"kubernetes.io/projected/2ab27811-9a59-4997-8546-0b1bf6668150-kube-api-access-42w64\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.092382 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.211922 4762 generic.go:334] "Generic (PLEG): container finished" podID="9275bc7c-03c2-4ef8-9112-29b4e80555e3" containerID="23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e" exitCode=0 Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.211951 4762 generic.go:334] "Generic (PLEG): container finished" podID="9275bc7c-03c2-4ef8-9112-29b4e80555e3" containerID="e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7" exitCode=143 Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.211985 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.212003 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9275bc7c-03c2-4ef8-9112-29b4e80555e3","Type":"ContainerDied","Data":"23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e"} Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.212030 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9275bc7c-03c2-4ef8-9112-29b4e80555e3","Type":"ContainerDied","Data":"e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7"} Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.212042 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9275bc7c-03c2-4ef8-9112-29b4e80555e3","Type":"ContainerDied","Data":"0db78d1c8e8655feff4b04ec605523de2a40acac0f681ff873b4ba4a831b55aa"} Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.212059 4762 scope.go:117] "RemoveContainer" containerID="23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.218644 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56528a09-adcd-4337-80e8-3848a7cfa652","Type":"ContainerStarted","Data":"a78d250337a95c23b3037b128fc0d46b1693cc194bf7dc72246c0a51f96cd93b"} Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.221161 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-31d0-account-create-update-pldhf" event={"ID":"2ab27811-9a59-4997-8546-0b1bf6668150","Type":"ContainerDied","Data":"38ac1faf9d9e3ae9945e316f61df5dfb96b9ae9b27e8bb81354577f7748b4de1"} Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.221192 4762 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="38ac1faf9d9e3ae9945e316f61df5dfb96b9ae9b27e8bb81354577f7748b4de1" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.221209 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-31d0-account-create-update-pldhf" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.235125 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-public-tls-certs\") pod \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.235217 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-combined-ca-bundle\") pod \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.235276 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9275bc7c-03c2-4ef8-9112-29b4e80555e3-ceph\") pod \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.235293 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9275bc7c-03c2-4ef8-9112-29b4e80555e3-logs\") pod \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.235346 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-scripts\") pod \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " 
Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.235379 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.235401 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9275bc7c-03c2-4ef8-9112-29b4e80555e3-httpd-run\") pod \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.235425 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnt8r\" (UniqueName: \"kubernetes.io/projected/9275bc7c-03c2-4ef8-9112-29b4e80555e3-kube-api-access-mnt8r\") pod \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.235689 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-config-data\") pod \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.236260 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9275bc7c-03c2-4ef8-9112-29b4e80555e3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "9275bc7c-03c2-4ef8-9112-29b4e80555e3" (UID: "9275bc7c-03c2-4ef8-9112-29b4e80555e3"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.237418 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9275bc7c-03c2-4ef8-9112-29b4e80555e3-logs" (OuterVolumeSpecName: "logs") pod "9275bc7c-03c2-4ef8-9112-29b4e80555e3" (UID: "9275bc7c-03c2-4ef8-9112-29b4e80555e3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.242659 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9275bc7c-03c2-4ef8-9112-29b4e80555e3-kube-api-access-mnt8r" (OuterVolumeSpecName: "kube-api-access-mnt8r") pod "9275bc7c-03c2-4ef8-9112-29b4e80555e3" (UID: "9275bc7c-03c2-4ef8-9112-29b4e80555e3"). InnerVolumeSpecName "kube-api-access-mnt8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.243377 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "9275bc7c-03c2-4ef8-9112-29b4e80555e3" (UID: "9275bc7c-03c2-4ef8-9112-29b4e80555e3"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.244269 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9275bc7c-03c2-4ef8-9112-29b4e80555e3-ceph" (OuterVolumeSpecName: "ceph") pod "9275bc7c-03c2-4ef8-9112-29b4e80555e3" (UID: "9275bc7c-03c2-4ef8-9112-29b4e80555e3"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.252481 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-scripts" (OuterVolumeSpecName: "scripts") pod "9275bc7c-03c2-4ef8-9112-29b4e80555e3" (UID: "9275bc7c-03c2-4ef8-9112-29b4e80555e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.256357 4762 scope.go:117] "RemoveContainer" containerID="e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.338388 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9275bc7c-03c2-4ef8-9112-29b4e80555e3" (UID: "9275bc7c-03c2-4ef8-9112-29b4e80555e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.338686 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-combined-ca-bundle\") pod \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\" (UID: \"9275bc7c-03c2-4ef8-9112-29b4e80555e3\") " Mar 08 01:35:34 crc kubenswrapper[4762]: W0308 01:35:34.338849 4762 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/9275bc7c-03c2-4ef8-9112-29b4e80555e3/volumes/kubernetes.io~secret/combined-ca-bundle Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.338874 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9275bc7c-03c2-4ef8-9112-29b4e80555e3" (UID: "9275bc7c-03c2-4ef8-9112-29b4e80555e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.339810 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/9275bc7c-03c2-4ef8-9112-29b4e80555e3-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.339831 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9275bc7c-03c2-4ef8-9112-29b4e80555e3-logs\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.339842 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.339862 4762 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.339875 4762 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9275bc7c-03c2-4ef8-9112-29b4e80555e3-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.339885 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnt8r\" (UniqueName: \"kubernetes.io/projected/9275bc7c-03c2-4ef8-9112-29b4e80555e3-kube-api-access-mnt8r\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.339896 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.366662 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-config-data" (OuterVolumeSpecName: "config-data") pod "9275bc7c-03c2-4ef8-9112-29b4e80555e3" (UID: "9275bc7c-03c2-4ef8-9112-29b4e80555e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.383190 4762 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.389443 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9275bc7c-03c2-4ef8-9112-29b4e80555e3" (UID: "9275bc7c-03c2-4ef8-9112-29b4e80555e3"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.441641 4762 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.441672 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.441683 4762 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9275bc7c-03c2-4ef8-9112-29b4e80555e3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.495926 4762 scope.go:117] "RemoveContainer" containerID="23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e" Mar 08 01:35:34 crc kubenswrapper[4762]: E0308 01:35:34.497268 4762 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e\": container with ID starting with 23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e not found: ID does not exist" containerID="23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.497310 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e"} err="failed to get container status \"23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e\": rpc error: code = NotFound desc = could not find container \"23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e\": container with ID starting with 23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e not found: ID does not exist" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.497387 4762 scope.go:117] "RemoveContainer" containerID="e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7" Mar 08 01:35:34 crc kubenswrapper[4762]: E0308 01:35:34.497890 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7\": container with ID starting with e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7 not found: ID does not exist" containerID="e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.497927 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7"} err="failed to get container status \"e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7\": rpc error: code = NotFound desc = could 
not find container \"e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7\": container with ID starting with e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7 not found: ID does not exist" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.497965 4762 scope.go:117] "RemoveContainer" containerID="23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.498253 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e"} err="failed to get container status \"23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e\": rpc error: code = NotFound desc = could not find container \"23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e\": container with ID starting with 23fbf1811977452a9fbbb126f231cccd9408d0b475b4c748a7563be1d578d47e not found: ID does not exist" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.498294 4762 scope.go:117] "RemoveContainer" containerID="e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.498919 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7"} err="failed to get container status \"e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7\": rpc error: code = NotFound desc = could not find container \"e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7\": container with ID starting with e3c4344e463502aa57d0e9c3cafe8ac31534c54932fee5aa0c76305850c4eda7 not found: ID does not exist" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.571135 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.590954 4762 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.600275 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 08 01:35:34 crc kubenswrapper[4762]: E0308 01:35:34.600822 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bd8503-1bcc-4cb0-9928-19f698eca2fd" containerName="mariadb-database-create" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.600840 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bd8503-1bcc-4cb0-9928-19f698eca2fd" containerName="mariadb-database-create" Mar 08 01:35:34 crc kubenswrapper[4762]: E0308 01:35:34.600855 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ab27811-9a59-4997-8546-0b1bf6668150" containerName="mariadb-account-create-update" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.600862 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ab27811-9a59-4997-8546-0b1bf6668150" containerName="mariadb-account-create-update" Mar 08 01:35:34 crc kubenswrapper[4762]: E0308 01:35:34.600880 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9275bc7c-03c2-4ef8-9112-29b4e80555e3" containerName="glance-log" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.600887 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="9275bc7c-03c2-4ef8-9112-29b4e80555e3" containerName="glance-log" Mar 08 01:35:34 crc kubenswrapper[4762]: E0308 01:35:34.600903 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9275bc7c-03c2-4ef8-9112-29b4e80555e3" containerName="glance-httpd" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.600909 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="9275bc7c-03c2-4ef8-9112-29b4e80555e3" containerName="glance-httpd" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.601107 4762 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9275bc7c-03c2-4ef8-9112-29b4e80555e3" containerName="glance-log" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.601117 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ab27811-9a59-4997-8546-0b1bf6668150" containerName="mariadb-account-create-update" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.601133 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bd8503-1bcc-4cb0-9928-19f698eca2fd" containerName="mariadb-database-create" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.601145 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="9275bc7c-03c2-4ef8-9112-29b4e80555e3" containerName="glance-httpd" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.602546 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.607236 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.607451 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.609338 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.659641 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h72s\" (UniqueName: \"kubernetes.io/projected/426654cc-0d6c-4a1c-8614-2e7be9e750fe-kube-api-access-7h72s\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.659883 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/426654cc-0d6c-4a1c-8614-2e7be9e750fe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.659932 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.659978 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/426654cc-0d6c-4a1c-8614-2e7be9e750fe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.660017 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426654cc-0d6c-4a1c-8614-2e7be9e750fe-logs\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.660057 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426654cc-0d6c-4a1c-8614-2e7be9e750fe-config-data\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.660077 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/426654cc-0d6c-4a1c-8614-2e7be9e750fe-scripts\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.660133 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426654cc-0d6c-4a1c-8614-2e7be9e750fe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.660161 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/426654cc-0d6c-4a1c-8614-2e7be9e750fe-ceph\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.763338 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h72s\" (UniqueName: \"kubernetes.io/projected/426654cc-0d6c-4a1c-8614-2e7be9e750fe-kube-api-access-7h72s\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.763505 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/426654cc-0d6c-4a1c-8614-2e7be9e750fe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.763540 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.763576 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/426654cc-0d6c-4a1c-8614-2e7be9e750fe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.763609 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426654cc-0d6c-4a1c-8614-2e7be9e750fe-logs\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.763642 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426654cc-0d6c-4a1c-8614-2e7be9e750fe-config-data\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.763657 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426654cc-0d6c-4a1c-8614-2e7be9e750fe-scripts\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.763716 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/426654cc-0d6c-4a1c-8614-2e7be9e750fe-ceph\") pod \"glance-default-external-api-0\" (UID: 
\"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.763731 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426654cc-0d6c-4a1c-8614-2e7be9e750fe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.764355 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/426654cc-0d6c-4a1c-8614-2e7be9e750fe-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.764693 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.767188 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/426654cc-0d6c-4a1c-8614-2e7be9e750fe-logs\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.772107 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/426654cc-0d6c-4a1c-8614-2e7be9e750fe-scripts\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 
08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.772114 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/426654cc-0d6c-4a1c-8614-2e7be9e750fe-config-data\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.772563 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/426654cc-0d6c-4a1c-8614-2e7be9e750fe-ceph\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.784178 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/426654cc-0d6c-4a1c-8614-2e7be9e750fe-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.784781 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/426654cc-0d6c-4a1c-8614-2e7be9e750fe-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.790351 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h72s\" (UniqueName: \"kubernetes.io/projected/426654cc-0d6c-4a1c-8614-2e7be9e750fe-kube-api-access-7h72s\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.805579 4762 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"426654cc-0d6c-4a1c-8614-2e7be9e750fe\") " pod="openstack/glance-default-external-api-0" Mar 08 01:35:34 crc kubenswrapper[4762]: I0308 01:35:34.935642 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 08 01:35:35 crc kubenswrapper[4762]: I0308 01:35:35.250299 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"56528a09-adcd-4337-80e8-3848a7cfa652","Type":"ContainerStarted","Data":"de2f98d289b16c45a4b04c5a800a38f12edd7b9c61e35745063cba7acebd40e8"} Mar 08 01:35:35 crc kubenswrapper[4762]: I0308 01:35:35.274906 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.274890284 podStartE2EDuration="4.274890284s" podCreationTimestamp="2026-03-08 01:35:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 01:35:35.271409671 +0000 UTC m=+4356.745554015" watchObservedRunningTime="2026-03-08 01:35:35.274890284 +0000 UTC m=+4356.749034628" Mar 08 01:35:35 crc kubenswrapper[4762]: I0308 01:35:35.292791 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9275bc7c-03c2-4ef8-9112-29b4e80555e3" path="/var/lib/kubelet/pods/9275bc7c-03c2-4ef8-9112-29b4e80555e3/volumes" Mar 08 01:35:35 crc kubenswrapper[4762]: I0308 01:35:35.653724 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 08 01:35:35 crc kubenswrapper[4762]: W0308 01:35:35.656071 4762 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod426654cc_0d6c_4a1c_8614_2e7be9e750fe.slice/crio-d37c247cc6dc336ae51d9253edd42fff9e3da80353127c1be56173dfd370a1ae WatchSource:0}: Error finding container d37c247cc6dc336ae51d9253edd42fff9e3da80353127c1be56173dfd370a1ae: Status 404 returned error can't find the container with id d37c247cc6dc336ae51d9253edd42fff9e3da80353127c1be56173dfd370a1ae Mar 08 01:35:36 crc kubenswrapper[4762]: I0308 01:35:36.273708 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"426654cc-0d6c-4a1c-8614-2e7be9e750fe","Type":"ContainerStarted","Data":"86a5133cc1181762d914a2a110c0276688c7bc3690ebaf2f568fa6ecdd349631"} Mar 08 01:35:36 crc kubenswrapper[4762]: I0308 01:35:36.274324 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"426654cc-0d6c-4a1c-8614-2e7be9e750fe","Type":"ContainerStarted","Data":"d37c247cc6dc336ae51d9253edd42fff9e3da80353127c1be56173dfd370a1ae"} Mar 08 01:35:37 crc kubenswrapper[4762]: I0308 01:35:37.102727 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Mar 08 01:35:37 crc kubenswrapper[4762]: I0308 01:35:37.292719 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Mar 08 01:35:37 crc kubenswrapper[4762]: I0308 01:35:37.470306 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.163394 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-jm7mn"] Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.165789 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-jm7mn" Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.200636 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.200898 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-9c5m6" Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.218355 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-jm7mn"] Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.247111 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-job-config-data\") pod \"manila-db-sync-jm7mn\" (UID: \"f263492e-5989-410e-875a-3857b7821aeb\") " pod="openstack/manila-db-sync-jm7mn" Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.247215 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq42p\" (UniqueName: \"kubernetes.io/projected/f263492e-5989-410e-875a-3857b7821aeb-kube-api-access-xq42p\") pod \"manila-db-sync-jm7mn\" (UID: \"f263492e-5989-410e-875a-3857b7821aeb\") " pod="openstack/manila-db-sync-jm7mn" Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.247282 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-config-data\") pod \"manila-db-sync-jm7mn\" (UID: \"f263492e-5989-410e-875a-3857b7821aeb\") " pod="openstack/manila-db-sync-jm7mn" Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.247735 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-combined-ca-bundle\") pod \"manila-db-sync-jm7mn\" (UID: \"f263492e-5989-410e-875a-3857b7821aeb\") " pod="openstack/manila-db-sync-jm7mn" Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.349227 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-job-config-data\") pod \"manila-db-sync-jm7mn\" (UID: \"f263492e-5989-410e-875a-3857b7821aeb\") " pod="openstack/manila-db-sync-jm7mn" Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.349303 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xq42p\" (UniqueName: \"kubernetes.io/projected/f263492e-5989-410e-875a-3857b7821aeb-kube-api-access-xq42p\") pod \"manila-db-sync-jm7mn\" (UID: \"f263492e-5989-410e-875a-3857b7821aeb\") " pod="openstack/manila-db-sync-jm7mn" Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.349350 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-config-data\") pod \"manila-db-sync-jm7mn\" (UID: \"f263492e-5989-410e-875a-3857b7821aeb\") " pod="openstack/manila-db-sync-jm7mn" Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.349457 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-combined-ca-bundle\") pod \"manila-db-sync-jm7mn\" (UID: \"f263492e-5989-410e-875a-3857b7821aeb\") " pod="openstack/manila-db-sync-jm7mn" Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.358893 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-combined-ca-bundle\") pod \"manila-db-sync-jm7mn\" (UID: 
\"f263492e-5989-410e-875a-3857b7821aeb\") " pod="openstack/manila-db-sync-jm7mn" Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.359138 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-config-data\") pod \"manila-db-sync-jm7mn\" (UID: \"f263492e-5989-410e-875a-3857b7821aeb\") " pod="openstack/manila-db-sync-jm7mn" Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.359076 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-job-config-data\") pod \"manila-db-sync-jm7mn\" (UID: \"f263492e-5989-410e-875a-3857b7821aeb\") " pod="openstack/manila-db-sync-jm7mn" Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.369186 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq42p\" (UniqueName: \"kubernetes.io/projected/f263492e-5989-410e-875a-3857b7821aeb-kube-api-access-xq42p\") pod \"manila-db-sync-jm7mn\" (UID: \"f263492e-5989-410e-875a-3857b7821aeb\") " pod="openstack/manila-db-sync-jm7mn" Mar 08 01:35:38 crc kubenswrapper[4762]: I0308 01:35:38.529966 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-jm7mn" Mar 08 01:35:41 crc kubenswrapper[4762]: I0308 01:35:41.579172 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 08 01:35:41 crc kubenswrapper[4762]: I0308 01:35:41.579657 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 08 01:35:41 crc kubenswrapper[4762]: I0308 01:35:41.634322 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 08 01:35:41 crc kubenswrapper[4762]: I0308 01:35:41.640217 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 08 01:35:41 crc kubenswrapper[4762]: I0308 01:35:41.845813 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-jm7mn"] Mar 08 01:35:41 crc kubenswrapper[4762]: W0308 01:35:41.851151 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf263492e_5989_410e_875a_3857b7821aeb.slice/crio-2effb3b41236c32055aaf48e26e4e0f3cd6f955b156eb1fdfe98fa0d8f3a1aa8 WatchSource:0}: Error finding container 2effb3b41236c32055aaf48e26e4e0f3cd6f955b156eb1fdfe98fa0d8f3a1aa8: Status 404 returned error can't find the container with id 2effb3b41236c32055aaf48e26e4e0f3cd6f955b156eb1fdfe98fa0d8f3a1aa8 Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.373279 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69fb85975b-kwm2b" event={"ID":"9a158b3f-a655-4ba9-87a9-b74c44bbc54a","Type":"ContainerStarted","Data":"4515f28acfb87831b4e06ced07dbf9dcb3ecced4ee00ce44f7989dddfbf32d70"} Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.373329 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69fb85975b-kwm2b" 
event={"ID":"9a158b3f-a655-4ba9-87a9-b74c44bbc54a","Type":"ContainerStarted","Data":"813234363fd65833a81c9d5a15e93d8e4bb3a2e33c41d9ffb94ec219a055faf2"} Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.375568 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d7d9f8bb9-rwxd8" event={"ID":"01c66c2c-537a-469a-958a-5edb6e67a8ab","Type":"ContainerStarted","Data":"d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f"} Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.375608 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d7d9f8bb9-rwxd8" event={"ID":"01c66c2c-537a-469a-958a-5edb6e67a8ab","Type":"ContainerStarted","Data":"573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789"} Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.375699 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d7d9f8bb9-rwxd8" podUID="01c66c2c-537a-469a-958a-5edb6e67a8ab" containerName="horizon-log" containerID="cri-o://573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789" gracePeriod=30 Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.375704 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6d7d9f8bb9-rwxd8" podUID="01c66c2c-537a-469a-958a-5edb6e67a8ab" containerName="horizon" containerID="cri-o://d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f" gracePeriod=30 Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.379283 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c87b746d7-v5s72" event={"ID":"1675bf2f-acd6-4383-98d7-7c5fd92d9095","Type":"ContainerStarted","Data":"4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349"} Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.379332 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c87b746d7-v5s72" 
event={"ID":"1675bf2f-acd6-4383-98d7-7c5fd92d9095","Type":"ContainerStarted","Data":"c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2"} Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.379405 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7c87b746d7-v5s72" podUID="1675bf2f-acd6-4383-98d7-7c5fd92d9095" containerName="horizon-log" containerID="cri-o://c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2" gracePeriod=30 Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.379443 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7c87b746d7-v5s72" podUID="1675bf2f-acd6-4383-98d7-7c5fd92d9095" containerName="horizon" containerID="cri-o://4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349" gracePeriod=30 Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.383951 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jm7mn" event={"ID":"f263492e-5989-410e-875a-3857b7821aeb","Type":"ContainerStarted","Data":"2effb3b41236c32055aaf48e26e4e0f3cd6f955b156eb1fdfe98fa0d8f3a1aa8"} Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.386868 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c45886cfb-v4xv2" event={"ID":"f395fca0-1bc0-43fe-aca6-4910f6ca3347","Type":"ContainerStarted","Data":"f80541e4358db2b91d79a49462255d802e7df379dfd0a47ab37aaecd74479352"} Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.386913 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c45886cfb-v4xv2" event={"ID":"f395fca0-1bc0-43fe-aca6-4910f6ca3347","Type":"ContainerStarted","Data":"ffd043e0a8b6b7b19d6cbb89797f1e46e87d0ea19d71543583e28a775d0122af"} Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.398133 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-69fb85975b-kwm2b" podStartSLOduration=3.127766736 
podStartE2EDuration="12.398115524s" podCreationTimestamp="2026-03-08 01:35:30 +0000 UTC" firstStartedPulling="2026-03-08 01:35:32.200541873 +0000 UTC m=+4353.674686227" lastFinishedPulling="2026-03-08 01:35:41.470890681 +0000 UTC m=+4362.945035015" observedRunningTime="2026-03-08 01:35:42.396252599 +0000 UTC m=+4363.870396943" watchObservedRunningTime="2026-03-08 01:35:42.398115524 +0000 UTC m=+4363.872259868" Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.402872 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"426654cc-0d6c-4a1c-8614-2e7be9e750fe","Type":"ContainerStarted","Data":"32d601a4dd2bdd49a396226a91c7b7b6550131a04dc8d230a2f8a0d9f151c5e7"} Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.403298 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.403330 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.430572 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7c87b746d7-v5s72" podStartSLOduration=3.57146089 podStartE2EDuration="15.430553683s" podCreationTimestamp="2026-03-08 01:35:27 +0000 UTC" firstStartedPulling="2026-03-08 01:35:29.545286682 +0000 UTC m=+4351.019431016" lastFinishedPulling="2026-03-08 01:35:41.404379465 +0000 UTC m=+4362.878523809" observedRunningTime="2026-03-08 01:35:42.411249516 +0000 UTC m=+4363.885393860" watchObservedRunningTime="2026-03-08 01:35:42.430553683 +0000 UTC m=+4363.904698027" Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.436404 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6c45886cfb-v4xv2" podStartSLOduration=3.019709356 podStartE2EDuration="12.436391847s" podCreationTimestamp="2026-03-08 01:35:30 
+0000 UTC" firstStartedPulling="2026-03-08 01:35:32.008214147 +0000 UTC m=+4353.482358491" lastFinishedPulling="2026-03-08 01:35:41.424896648 +0000 UTC m=+4362.899040982" observedRunningTime="2026-03-08 01:35:42.434996456 +0000 UTC m=+4363.909140800" watchObservedRunningTime="2026-03-08 01:35:42.436391847 +0000 UTC m=+4363.910536191" Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.459310 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6d7d9f8bb9-rwxd8" podStartSLOduration=2.595968773 podStartE2EDuration="14.459293592s" podCreationTimestamp="2026-03-08 01:35:28 +0000 UTC" firstStartedPulling="2026-03-08 01:35:29.561067474 +0000 UTC m=+4351.035211818" lastFinishedPulling="2026-03-08 01:35:41.424392293 +0000 UTC m=+4362.898536637" observedRunningTime="2026-03-08 01:35:42.454256081 +0000 UTC m=+4363.928400425" watchObservedRunningTime="2026-03-08 01:35:42.459293592 +0000 UTC m=+4363.933437936" Mar 08 01:35:42 crc kubenswrapper[4762]: I0308 01:35:42.487609 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.487590117 podStartE2EDuration="8.487590117s" podCreationTimestamp="2026-03-08 01:35:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 01:35:42.479065543 +0000 UTC m=+4363.953209887" watchObservedRunningTime="2026-03-08 01:35:42.487590117 +0000 UTC m=+4363.961734461" Mar 08 01:35:44 crc kubenswrapper[4762]: I0308 01:35:44.422627 4762 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 01:35:44 crc kubenswrapper[4762]: I0308 01:35:44.423101 4762 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 01:35:44 crc kubenswrapper[4762]: I0308 01:35:44.936114 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" 
Mar 08 01:35:44 crc kubenswrapper[4762]: I0308 01:35:44.936166 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 08 01:35:45 crc kubenswrapper[4762]: I0308 01:35:45.017599 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 08 01:35:45 crc kubenswrapper[4762]: I0308 01:35:45.023883 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 08 01:35:45 crc kubenswrapper[4762]: I0308 01:35:45.431684 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 08 01:35:45 crc kubenswrapper[4762]: I0308 01:35:45.431730 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 08 01:35:46 crc kubenswrapper[4762]: I0308 01:35:46.066148 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 08 01:35:46 crc kubenswrapper[4762]: I0308 01:35:46.066424 4762 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 08 01:35:46 crc kubenswrapper[4762]: I0308 01:35:46.070173 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 08 01:35:47 crc kubenswrapper[4762]: I0308 01:35:47.513726 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 08 01:35:48 crc kubenswrapper[4762]: I0308 01:35:48.505832 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jm7mn" event={"ID":"f263492e-5989-410e-875a-3857b7821aeb","Type":"ContainerStarted","Data":"e87c72a33d554c645866f865993767b63284c2c71b7e5d745beff904c56f386e"} Mar 08 01:35:48 crc kubenswrapper[4762]: I0308 01:35:48.536724 4762 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-jm7mn" podStartSLOduration=5.158837712 podStartE2EDuration="10.536696105s" podCreationTimestamp="2026-03-08 01:35:38 +0000 UTC" firstStartedPulling="2026-03-08 01:35:41.861679038 +0000 UTC m=+4363.335823382" lastFinishedPulling="2026-03-08 01:35:47.239537431 +0000 UTC m=+4368.713681775" observedRunningTime="2026-03-08 01:35:48.528675405 +0000 UTC m=+4370.002819799" watchObservedRunningTime="2026-03-08 01:35:48.536696105 +0000 UTC m=+4370.010840489" Mar 08 01:35:48 crc kubenswrapper[4762]: I0308 01:35:48.662345 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7c87b746d7-v5s72" Mar 08 01:35:48 crc kubenswrapper[4762]: I0308 01:35:48.672565 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6d7d9f8bb9-rwxd8" Mar 08 01:35:49 crc kubenswrapper[4762]: I0308 01:35:49.387916 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 08 01:35:51 crc kubenswrapper[4762]: I0308 01:35:51.276409 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:51 crc kubenswrapper[4762]: I0308 01:35:51.276681 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:35:51 crc kubenswrapper[4762]: I0308 01:35:51.276691 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:51 crc kubenswrapper[4762]: I0308 01:35:51.276700 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:35:56 crc kubenswrapper[4762]: I0308 01:35:56.615674 4762 generic.go:334] "Generic (PLEG): container finished" podID="f263492e-5989-410e-875a-3857b7821aeb" 
containerID="e87c72a33d554c645866f865993767b63284c2c71b7e5d745beff904c56f386e" exitCode=0 Mar 08 01:35:56 crc kubenswrapper[4762]: I0308 01:35:56.615876 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jm7mn" event={"ID":"f263492e-5989-410e-875a-3857b7821aeb","Type":"ContainerDied","Data":"e87c72a33d554c645866f865993767b63284c2c71b7e5d745beff904c56f386e"} Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.283624 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-jm7mn" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.392585 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-job-config-data\") pod \"f263492e-5989-410e-875a-3857b7821aeb\" (UID: \"f263492e-5989-410e-875a-3857b7821aeb\") " Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.393166 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-combined-ca-bundle\") pod \"f263492e-5989-410e-875a-3857b7821aeb\" (UID: \"f263492e-5989-410e-875a-3857b7821aeb\") " Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.393327 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-config-data\") pod \"f263492e-5989-410e-875a-3857b7821aeb\" (UID: \"f263492e-5989-410e-875a-3857b7821aeb\") " Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.393407 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq42p\" (UniqueName: \"kubernetes.io/projected/f263492e-5989-410e-875a-3857b7821aeb-kube-api-access-xq42p\") pod \"f263492e-5989-410e-875a-3857b7821aeb\" (UID: \"f263492e-5989-410e-875a-3857b7821aeb\") 
" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.400311 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f263492e-5989-410e-875a-3857b7821aeb-kube-api-access-xq42p" (OuterVolumeSpecName: "kube-api-access-xq42p") pod "f263492e-5989-410e-875a-3857b7821aeb" (UID: "f263492e-5989-410e-875a-3857b7821aeb"). InnerVolumeSpecName "kube-api-access-xq42p". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.416912 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "f263492e-5989-410e-875a-3857b7821aeb" (UID: "f263492e-5989-410e-875a-3857b7821aeb"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.416931 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-config-data" (OuterVolumeSpecName: "config-data") pod "f263492e-5989-410e-875a-3857b7821aeb" (UID: "f263492e-5989-410e-875a-3857b7821aeb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.460353 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f263492e-5989-410e-875a-3857b7821aeb" (UID: "f263492e-5989-410e-875a-3857b7821aeb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.496143 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.496178 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq42p\" (UniqueName: \"kubernetes.io/projected/f263492e-5989-410e-875a-3857b7821aeb-kube-api-access-xq42p\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.496189 4762 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-job-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.496199 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f263492e-5989-410e-875a-3857b7821aeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.641608 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-jm7mn" event={"ID":"f263492e-5989-410e-875a-3857b7821aeb","Type":"ContainerDied","Data":"2effb3b41236c32055aaf48e26e4e0f3cd6f955b156eb1fdfe98fa0d8f3a1aa8"} Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.641678 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2effb3b41236c32055aaf48e26e4e0f3cd6f955b156eb1fdfe98fa0d8f3a1aa8" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.641785 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-jm7mn" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.906227 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Mar 08 01:35:58 crc kubenswrapper[4762]: E0308 01:35:58.910531 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f263492e-5989-410e-875a-3857b7821aeb" containerName="manila-db-sync" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.910568 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f263492e-5989-410e-875a-3857b7821aeb" containerName="manila-db-sync" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.910937 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f263492e-5989-410e-875a-3857b7821aeb" containerName="manila-db-sync" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.912204 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.924389 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-9c5m6" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.924727 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.924750 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.925257 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.927065 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Mar 08 01:35:58 crc kubenswrapper[4762]: I0308 01:35:58.996828 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Mar 08 01:35:58 crc 
kubenswrapper[4762]: I0308 01:35:58.998788 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.008163 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.009447 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.009503 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-scripts\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.016286 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.021070 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr9rx\" (UniqueName: \"kubernetes.io/projected/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-kube-api-access-hr9rx\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.021361 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-config-data\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " 
pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.021429 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.021547 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.124036 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-config-data\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.124356 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-scripts\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.124379 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.124504 4762 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-ceph\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.124529 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j885l\" (UniqueName: \"kubernetes.io/projected/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-kube-api-access-j885l\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.124576 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.124602 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.124934 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-config-data\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.124978 4762 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.125043 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.125075 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-scripts\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.125104 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr9rx\" (UniqueName: \"kubernetes.io/projected/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-kube-api-access-hr9rx\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.125127 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.125151 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.125244 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.138776 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.140222 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.147249 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-scripts\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.159461 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-config-data\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " 
pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.159856 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr9rx\" (UniqueName: \"kubernetes.io/projected/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-kube-api-access-hr9rx\") pod \"manila-scheduler-0\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.162917 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c8d8d886c-75966"] Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.164718 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.228083 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-ovsdbserver-nb\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.228133 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-config\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.228168 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-ceph\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.228191 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j885l\" (UniqueName: \"kubernetes.io/projected/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-kube-api-access-j885l\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.228219 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.228241 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-config-data\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.228287 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-ovsdbserver-sb\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.228322 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-openstack-edpm-ipam\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.228347 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.228368 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.228390 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfb5g\" (UniqueName: \"kubernetes.io/projected/328fe58c-2b7a-4e69-8103-0d7dcf57d008-kube-api-access-zfb5g\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.228425 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-dns-swift-storage-0\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.228476 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.228542 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-dns-svc\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.228563 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-scripts\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.231627 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.231681 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.233050 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-ceph\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.233849 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.239608 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-config-data\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.240604 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.249783 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.250382 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-scripts\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.257019 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-9c5m6" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.266922 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.268289 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j885l\" (UniqueName: \"kubernetes.io/projected/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-kube-api-access-j885l\") pod \"manila-share-share1-0\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.350235 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c8d8d886c-75966"] Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.350909 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.351972 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-ovsdbserver-nb\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.352233 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-config\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.352342 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-ovsdbserver-sb\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.352394 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-openstack-edpm-ipam\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.352425 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfb5g\" (UniqueName: \"kubernetes.io/projected/328fe58c-2b7a-4e69-8103-0d7dcf57d008-kube-api-access-zfb5g\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.352466 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-dns-swift-storage-0\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.352596 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-dns-svc\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.353637 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-dns-svc\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.354418 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-ovsdbserver-nb\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.354967 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-config\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.356373 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-ovsdbserver-sb\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.357185 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-openstack-edpm-ipam\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.358564 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/328fe58c-2b7a-4e69-8103-0d7dcf57d008-dns-swift-storage-0\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.466033 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfb5g\" (UniqueName: 
\"kubernetes.io/projected/328fe58c-2b7a-4e69-8103-0d7dcf57d008-kube-api-access-zfb5g\") pod \"dnsmasq-dns-c8d8d886c-75966\" (UID: \"328fe58c-2b7a-4e69-8103-0d7dcf57d008\") " pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.518342 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.520532 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.532185 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.549831 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.666577 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-scripts\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.666647 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6af7df4a-f9ac-4a99-9e55-194eac2025f0-etc-machine-id\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.666667 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc 
kubenswrapper[4762]: I0308 01:35:59.666701 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-config-data-custom\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.666806 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbjlc\" (UniqueName: \"kubernetes.io/projected/6af7df4a-f9ac-4a99-9e55-194eac2025f0-kube-api-access-pbjlc\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.666948 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-config-data\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.666993 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6af7df4a-f9ac-4a99-9e55-194eac2025f0-logs\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.731909 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.778378 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-scripts\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.778428 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6af7df4a-f9ac-4a99-9e55-194eac2025f0-etc-machine-id\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.778445 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.778463 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-config-data-custom\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.778527 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbjlc\" (UniqueName: \"kubernetes.io/projected/6af7df4a-f9ac-4a99-9e55-194eac2025f0-kube-api-access-pbjlc\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.778606 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-config-data\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.778642 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6af7df4a-f9ac-4a99-9e55-194eac2025f0-logs\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.779055 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6af7df4a-f9ac-4a99-9e55-194eac2025f0-logs\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.780207 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6af7df4a-f9ac-4a99-9e55-194eac2025f0-etc-machine-id\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.787271 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-scripts\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.791306 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-config-data-custom\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.797456 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-config-data\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.800246 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.813911 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbjlc\" (UniqueName: \"kubernetes.io/projected/6af7df4a-f9ac-4a99-9e55-194eac2025f0-kube-api-access-pbjlc\") pod \"manila-api-0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.871460 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Mar 08 01:35:59 crc kubenswrapper[4762]: I0308 01:35:59.991194 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Mar 08 01:36:00 crc kubenswrapper[4762]: I0308 01:36:00.169818 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548896-d4xzj"] Mar 08 01:36:00 crc kubenswrapper[4762]: I0308 01:36:00.171349 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548896-d4xzj" Mar 08 01:36:00 crc kubenswrapper[4762]: I0308 01:36:00.176130 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:36:00 crc kubenswrapper[4762]: I0308 01:36:00.176259 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:36:00 crc kubenswrapper[4762]: I0308 01:36:00.176368 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:36:00 crc kubenswrapper[4762]: I0308 01:36:00.186418 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548896-d4xzj"] Mar 08 01:36:00 crc kubenswrapper[4762]: I0308 01:36:00.281588 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Mar 08 01:36:00 crc kubenswrapper[4762]: I0308 01:36:00.297109 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99ph5\" (UniqueName: \"kubernetes.io/projected/b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4-kube-api-access-99ph5\") pod \"auto-csr-approver-29548896-d4xzj\" (UID: \"b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4\") " pod="openshift-infra/auto-csr-approver-29548896-d4xzj" Mar 08 01:36:00 crc kubenswrapper[4762]: I0308 01:36:00.335597 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c8d8d886c-75966"] Mar 08 01:36:00 crc kubenswrapper[4762]: I0308 01:36:00.399178 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99ph5\" (UniqueName: \"kubernetes.io/projected/b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4-kube-api-access-99ph5\") pod \"auto-csr-approver-29548896-d4xzj\" (UID: \"b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4\") " pod="openshift-infra/auto-csr-approver-29548896-d4xzj" Mar 08 01:36:00 crc kubenswrapper[4762]: 
I0308 01:36:00.601263 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99ph5\" (UniqueName: \"kubernetes.io/projected/b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4-kube-api-access-99ph5\") pod \"auto-csr-approver-29548896-d4xzj\" (UID: \"b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4\") " pod="openshift-infra/auto-csr-approver-29548896-d4xzj" Mar 08 01:36:00 crc kubenswrapper[4762]: I0308 01:36:00.631153 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Mar 08 01:36:00 crc kubenswrapper[4762]: I0308 01:36:00.662604 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a20b89a6-90ce-49c8-87ca-96579ab0f0ae","Type":"ContainerStarted","Data":"652091f458d3fe25e9a8ccbd055db4bce322854b9591539a3ec315aad66ec054"} Mar 08 01:36:01 crc kubenswrapper[4762]: I0308 01:36:01.052016 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548896-d4xzj" Mar 08 01:36:01 crc kubenswrapper[4762]: I0308 01:36:01.725155 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c8d8d886c-75966" event={"ID":"328fe58c-2b7a-4e69-8103-0d7dcf57d008","Type":"ContainerStarted","Data":"85948fe7b31f0dd0490668e5cd0fd615912412e4e25f91d3fd1c05c39d366a2f"} Mar 08 01:36:01 crc kubenswrapper[4762]: I0308 01:36:01.725714 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c8d8d886c-75966" event={"ID":"328fe58c-2b7a-4e69-8103-0d7dcf57d008","Type":"ContainerStarted","Data":"ef324e3d84233959c2bf47d335ab1d1fd8925c105fc86e4800755f5164596b5a"} Mar 08 01:36:01 crc kubenswrapper[4762]: I0308 01:36:01.770089 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e","Type":"ContainerStarted","Data":"c04aff8312b052dd4775002b0a6370487e3ccc29ff8cdcea77dccb755006e1c5"} Mar 08 01:36:01 crc kubenswrapper[4762]: I0308 
01:36:01.773872 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"6af7df4a-f9ac-4a99-9e55-194eac2025f0","Type":"ContainerStarted","Data":"c7bc926a5180c8203cac35237b0bc16c9140bc70cab986435732338301ec8753"} Mar 08 01:36:01 crc kubenswrapper[4762]: I0308 01:36:01.882789 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548896-d4xzj"] Mar 08 01:36:02 crc kubenswrapper[4762]: I0308 01:36:02.527639 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Mar 08 01:36:02 crc kubenswrapper[4762]: I0308 01:36:02.807862 4762 generic.go:334] "Generic (PLEG): container finished" podID="328fe58c-2b7a-4e69-8103-0d7dcf57d008" containerID="85948fe7b31f0dd0490668e5cd0fd615912412e4e25f91d3fd1c05c39d366a2f" exitCode=0 Mar 08 01:36:02 crc kubenswrapper[4762]: I0308 01:36:02.807919 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c8d8d886c-75966" event={"ID":"328fe58c-2b7a-4e69-8103-0d7dcf57d008","Type":"ContainerDied","Data":"85948fe7b31f0dd0490668e5cd0fd615912412e4e25f91d3fd1c05c39d366a2f"} Mar 08 01:36:02 crc kubenswrapper[4762]: I0308 01:36:02.812684 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548896-d4xzj" event={"ID":"b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4","Type":"ContainerStarted","Data":"91a4b5b4385bb5ee144c061a3a22b507c6e4cbc1d79cb481d00ab43524c3bd34"} Mar 08 01:36:02 crc kubenswrapper[4762]: I0308 01:36:02.820931 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"6af7df4a-f9ac-4a99-9e55-194eac2025f0","Type":"ContainerStarted","Data":"151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e"} Mar 08 01:36:03 crc kubenswrapper[4762]: I0308 01:36:03.831002 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548896-d4xzj" 
event={"ID":"b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4","Type":"ContainerStarted","Data":"f89be2540ccb46cfb2e7fea4b18c097678fda0f9209de58c129784265de723a3"} Mar 08 01:36:03 crc kubenswrapper[4762]: I0308 01:36:03.835793 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"6af7df4a-f9ac-4a99-9e55-194eac2025f0","Type":"ContainerStarted","Data":"b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115"} Mar 08 01:36:03 crc kubenswrapper[4762]: I0308 01:36:03.835862 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Mar 08 01:36:03 crc kubenswrapper[4762]: I0308 01:36:03.835866 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="6af7df4a-f9ac-4a99-9e55-194eac2025f0" containerName="manila-api-log" containerID="cri-o://151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e" gracePeriod=30 Mar 08 01:36:03 crc kubenswrapper[4762]: I0308 01:36:03.835902 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="6af7df4a-f9ac-4a99-9e55-194eac2025f0" containerName="manila-api" containerID="cri-o://b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115" gracePeriod=30 Mar 08 01:36:03 crc kubenswrapper[4762]: I0308 01:36:03.838697 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a20b89a6-90ce-49c8-87ca-96579ab0f0ae","Type":"ContainerStarted","Data":"e4d0cd4e7ccd417d0679db2d011de41ae07e3188e9a8275100d3781164eb805a"} Mar 08 01:36:03 crc kubenswrapper[4762]: I0308 01:36:03.838732 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a20b89a6-90ce-49c8-87ca-96579ab0f0ae","Type":"ContainerStarted","Data":"4a31932eb8d8892efe1773ba3cd19f998ee74167729d1c3f4d5170e5391c3bee"} Mar 08 01:36:03 crc kubenswrapper[4762]: I0308 01:36:03.842476 4762 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c8d8d886c-75966" event={"ID":"328fe58c-2b7a-4e69-8103-0d7dcf57d008","Type":"ContainerStarted","Data":"9e90f17be736a77a8f7f98f87d7ae6d94c8700834a0acd413f8ae4b9dc2cd245"} Mar 08 01:36:03 crc kubenswrapper[4762]: I0308 01:36:03.842645 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:36:03 crc kubenswrapper[4762]: I0308 01:36:03.849735 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548896-d4xzj" podStartSLOduration=2.872322347 podStartE2EDuration="3.849722349s" podCreationTimestamp="2026-03-08 01:36:00 +0000 UTC" firstStartedPulling="2026-03-08 01:36:01.914208161 +0000 UTC m=+4383.388352505" lastFinishedPulling="2026-03-08 01:36:02.891608173 +0000 UTC m=+4384.365752507" observedRunningTime="2026-03-08 01:36:03.84677024 +0000 UTC m=+4385.320914584" watchObservedRunningTime="2026-03-08 01:36:03.849722349 +0000 UTC m=+4385.323866693" Mar 08 01:36:03 crc kubenswrapper[4762]: I0308 01:36:03.869539 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.86952361 podStartE2EDuration="4.86952361s" podCreationTimestamp="2026-03-08 01:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 01:36:03.863189561 +0000 UTC m=+4385.337333905" watchObservedRunningTime="2026-03-08 01:36:03.86952361 +0000 UTC m=+4385.343667954" Mar 08 01:36:03 crc kubenswrapper[4762]: I0308 01:36:03.908323 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c8d8d886c-75966" podStartSLOduration=4.908306989 podStartE2EDuration="4.908306989s" podCreationTimestamp="2026-03-08 01:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-08 01:36:03.901909167 +0000 UTC m=+4385.376053511" watchObservedRunningTime="2026-03-08 01:36:03.908306989 +0000 UTC m=+4385.382451333" Mar 08 01:36:03 crc kubenswrapper[4762]: I0308 01:36:03.916825 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:36:03 crc kubenswrapper[4762]: I0308 01:36:03.937741 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=4.806745566 podStartE2EDuration="5.937723077s" podCreationTimestamp="2026-03-08 01:35:58 +0000 UTC" firstStartedPulling="2026-03-08 01:36:00.014588367 +0000 UTC m=+4381.488732711" lastFinishedPulling="2026-03-08 01:36:01.145565878 +0000 UTC m=+4382.619710222" observedRunningTime="2026-03-08 01:36:03.927828162 +0000 UTC m=+4385.401972506" watchObservedRunningTime="2026-03-08 01:36:03.937723077 +0000 UTC m=+4385.411867421" Mar 08 01:36:03 crc kubenswrapper[4762]: I0308 01:36:03.978695 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.795808 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.853754 4762 generic.go:334] "Generic (PLEG): container finished" podID="6af7df4a-f9ac-4a99-9e55-194eac2025f0" containerID="b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115" exitCode=0 Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.853798 4762 generic.go:334] "Generic (PLEG): container finished" podID="6af7df4a-f9ac-4a99-9e55-194eac2025f0" containerID="151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e" exitCode=143 Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.854379 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"6af7df4a-f9ac-4a99-9e55-194eac2025f0","Type":"ContainerDied","Data":"b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115"} Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.854455 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"6af7df4a-f9ac-4a99-9e55-194eac2025f0","Type":"ContainerDied","Data":"151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e"} Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.854496 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"6af7df4a-f9ac-4a99-9e55-194eac2025f0","Type":"ContainerDied","Data":"c7bc926a5180c8203cac35237b0bc16c9140bc70cab986435732338301ec8753"} Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.854669 4762 scope.go:117] "RemoveContainer" containerID="b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.854926 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.900630 4762 scope.go:117] "RemoveContainer" containerID="151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.918704 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-combined-ca-bundle\") pod \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.918773 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6af7df4a-f9ac-4a99-9e55-194eac2025f0-logs\") pod \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.918836 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-scripts\") pod \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.918929 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6af7df4a-f9ac-4a99-9e55-194eac2025f0-etc-machine-id\") pod \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.918958 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-config-data-custom\") pod \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " Mar 08 01:36:04 crc 
kubenswrapper[4762]: I0308 01:36:04.919051 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-config-data\") pod \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.919177 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbjlc\" (UniqueName: \"kubernetes.io/projected/6af7df4a-f9ac-4a99-9e55-194eac2025f0-kube-api-access-pbjlc\") pod \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\" (UID: \"6af7df4a-f9ac-4a99-9e55-194eac2025f0\") " Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.919345 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6af7df4a-f9ac-4a99-9e55-194eac2025f0-logs" (OuterVolumeSpecName: "logs") pod "6af7df4a-f9ac-4a99-9e55-194eac2025f0" (UID: "6af7df4a-f9ac-4a99-9e55-194eac2025f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.919639 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6af7df4a-f9ac-4a99-9e55-194eac2025f0-logs\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.919830 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6af7df4a-f9ac-4a99-9e55-194eac2025f0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6af7df4a-f9ac-4a99-9e55-194eac2025f0" (UID: "6af7df4a-f9ac-4a99-9e55-194eac2025f0"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.927332 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-scripts" (OuterVolumeSpecName: "scripts") pod "6af7df4a-f9ac-4a99-9e55-194eac2025f0" (UID: "6af7df4a-f9ac-4a99-9e55-194eac2025f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.929747 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af7df4a-f9ac-4a99-9e55-194eac2025f0-kube-api-access-pbjlc" (OuterVolumeSpecName: "kube-api-access-pbjlc") pod "6af7df4a-f9ac-4a99-9e55-194eac2025f0" (UID: "6af7df4a-f9ac-4a99-9e55-194eac2025f0"). InnerVolumeSpecName "kube-api-access-pbjlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.944471 4762 scope.go:117] "RemoveContainer" containerID="b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115" Mar 08 01:36:04 crc kubenswrapper[4762]: E0308 01:36:04.945991 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115\": container with ID starting with b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115 not found: ID does not exist" containerID="b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.946023 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115"} err="failed to get container status \"b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115\": rpc error: code = NotFound desc = could not find container 
\"b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115\": container with ID starting with b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115 not found: ID does not exist" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.946043 4762 scope.go:117] "RemoveContainer" containerID="151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.950353 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6af7df4a-f9ac-4a99-9e55-194eac2025f0" (UID: "6af7df4a-f9ac-4a99-9e55-194eac2025f0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:04 crc kubenswrapper[4762]: E0308 01:36:04.950393 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e\": container with ID starting with 151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e not found: ID does not exist" containerID="151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.950506 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e"} err="failed to get container status \"151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e\": rpc error: code = NotFound desc = could not find container \"151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e\": container with ID starting with 151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e not found: ID does not exist" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.950554 4762 scope.go:117] "RemoveContainer" 
containerID="b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.954859 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115"} err="failed to get container status \"b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115\": rpc error: code = NotFound desc = could not find container \"b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115\": container with ID starting with b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115 not found: ID does not exist" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.954889 4762 scope.go:117] "RemoveContainer" containerID="151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.961139 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e"} err="failed to get container status \"151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e\": rpc error: code = NotFound desc = could not find container \"151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e\": container with ID starting with 151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e not found: ID does not exist" Mar 08 01:36:04 crc kubenswrapper[4762]: I0308 01:36:04.983731 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6af7df4a-f9ac-4a99-9e55-194eac2025f0" (UID: "6af7df4a-f9ac-4a99-9e55-194eac2025f0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.022471 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbjlc\" (UniqueName: \"kubernetes.io/projected/6af7df4a-f9ac-4a99-9e55-194eac2025f0-kube-api-access-pbjlc\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.022498 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.022508 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.022517 4762 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6af7df4a-f9ac-4a99-9e55-194eac2025f0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.022527 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.027953 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-config-data" (OuterVolumeSpecName: "config-data") pod "6af7df4a-f9ac-4a99-9e55-194eac2025f0" (UID: "6af7df4a-f9ac-4a99-9e55-194eac2025f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.124714 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6af7df4a-f9ac-4a99-9e55-194eac2025f0-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.235302 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.244667 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.259419 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Mar 08 01:36:05 crc kubenswrapper[4762]: E0308 01:36:05.259960 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af7df4a-f9ac-4a99-9e55-194eac2025f0" containerName="manila-api-log" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.259979 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af7df4a-f9ac-4a99-9e55-194eac2025f0" containerName="manila-api-log" Mar 08 01:36:05 crc kubenswrapper[4762]: E0308 01:36:05.260019 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af7df4a-f9ac-4a99-9e55-194eac2025f0" containerName="manila-api" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.260028 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af7df4a-f9ac-4a99-9e55-194eac2025f0" containerName="manila-api" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.260236 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af7df4a-f9ac-4a99-9e55-194eac2025f0" containerName="manila-api-log" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.260263 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af7df4a-f9ac-4a99-9e55-194eac2025f0" containerName="manila-api" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.261461 
4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.265612 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.265807 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.266054 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.283775 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af7df4a-f9ac-4a99-9e55-194eac2025f0" path="/var/lib/kubelet/pods/6af7df4a-f9ac-4a99-9e55-194eac2025f0/volumes" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.284461 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.432155 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-config-data-custom\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.432223 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-scripts\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.432261 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/88cfd6da-0fda-4f8d-8611-d609aa5b3276-logs\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.432350 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-public-tls-certs\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.432383 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-config-data\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.432439 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.432491 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88cfd6da-0fda-4f8d-8611-d609aa5b3276-etc-machine-id\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.432528 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-internal-tls-certs\") pod \"manila-api-0\" (UID: 
\"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.432559 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrwfx\" (UniqueName: \"kubernetes.io/projected/88cfd6da-0fda-4f8d-8611-d609aa5b3276-kube-api-access-xrwfx\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.535117 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-public-tls-certs\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.535177 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-config-data\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.535218 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.535268 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88cfd6da-0fda-4f8d-8611-d609aa5b3276-etc-machine-id\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.535290 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-internal-tls-certs\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.535315 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrwfx\" (UniqueName: \"kubernetes.io/projected/88cfd6da-0fda-4f8d-8611-d609aa5b3276-kube-api-access-xrwfx\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.535374 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-config-data-custom\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.535406 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-scripts\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.535711 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88cfd6da-0fda-4f8d-8611-d609aa5b3276-logs\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.535409 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/88cfd6da-0fda-4f8d-8611-d609aa5b3276-etc-machine-id\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " 
pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.536248 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/88cfd6da-0fda-4f8d-8611-d609aa5b3276-logs\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.541464 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.543313 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-public-tls-certs\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.544668 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-internal-tls-certs\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.545164 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-scripts\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.552571 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-config-data\") pod 
\"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.562429 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/88cfd6da-0fda-4f8d-8611-d609aa5b3276-config-data-custom\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.577308 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrwfx\" (UniqueName: \"kubernetes.io/projected/88cfd6da-0fda-4f8d-8611-d609aa5b3276-kube-api-access-xrwfx\") pod \"manila-api-0\" (UID: \"88cfd6da-0fda-4f8d-8611-d609aa5b3276\") " pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.593296 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.955256 4762 generic.go:334] "Generic (PLEG): container finished" podID="b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4" containerID="f89be2540ccb46cfb2e7fea4b18c097678fda0f9209de58c129784265de723a3" exitCode=0 Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.955582 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548896-d4xzj" event={"ID":"b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4","Type":"ContainerDied","Data":"f89be2540ccb46cfb2e7fea4b18c097678fda0f9209de58c129784265de723a3"} Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.971878 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.972206 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="721da70c-0049-498f-927c-dadcd0867152" containerName="ceilometer-central-agent" 
containerID="cri-o://5a3b9598d8c3d9367c7ff01d198b875e2be8ed0ff2556b7f1a37464a26e12fe7" gracePeriod=30 Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.972369 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="721da70c-0049-498f-927c-dadcd0867152" containerName="proxy-httpd" containerID="cri-o://b7af471a865b12dd686483a2062192e5a997f6bd613475cd6ac179e46b3ffae2" gracePeriod=30 Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.972425 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="721da70c-0049-498f-927c-dadcd0867152" containerName="sg-core" containerID="cri-o://240925bd14a1b5b2baa148f2e674073065cdc345e398514d38ceaaef3bcd9b5f" gracePeriod=30 Mar 08 01:36:05 crc kubenswrapper[4762]: I0308 01:36:05.972465 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="721da70c-0049-498f-927c-dadcd0867152" containerName="ceilometer-notification-agent" containerID="cri-o://0a1b14c854cb50902da15746f6462cf318ba092defa5ffd08d6746ef9847d470" gracePeriod=30 Mar 08 01:36:06 crc kubenswrapper[4762]: I0308 01:36:06.245930 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Mar 08 01:36:06 crc kubenswrapper[4762]: I0308 01:36:06.389054 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:36:06 crc kubenswrapper[4762]: I0308 01:36:06.449514 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6c45886cfb-v4xv2" Mar 08 01:36:06 crc kubenswrapper[4762]: I0308 01:36:06.510177 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69fb85975b-kwm2b"] Mar 08 01:36:06 crc kubenswrapper[4762]: I0308 01:36:06.996878 4762 generic.go:334] "Generic (PLEG): container finished" podID="721da70c-0049-498f-927c-dadcd0867152" 
containerID="b7af471a865b12dd686483a2062192e5a997f6bd613475cd6ac179e46b3ffae2" exitCode=0 Mar 08 01:36:06 crc kubenswrapper[4762]: I0308 01:36:06.997356 4762 generic.go:334] "Generic (PLEG): container finished" podID="721da70c-0049-498f-927c-dadcd0867152" containerID="240925bd14a1b5b2baa148f2e674073065cdc345e398514d38ceaaef3bcd9b5f" exitCode=2 Mar 08 01:36:06 crc kubenswrapper[4762]: I0308 01:36:06.997367 4762 generic.go:334] "Generic (PLEG): container finished" podID="721da70c-0049-498f-927c-dadcd0867152" containerID="0a1b14c854cb50902da15746f6462cf318ba092defa5ffd08d6746ef9847d470" exitCode=0 Mar 08 01:36:06 crc kubenswrapper[4762]: I0308 01:36:06.997378 4762 generic.go:334] "Generic (PLEG): container finished" podID="721da70c-0049-498f-927c-dadcd0867152" containerID="5a3b9598d8c3d9367c7ff01d198b875e2be8ed0ff2556b7f1a37464a26e12fe7" exitCode=0 Mar 08 01:36:06 crc kubenswrapper[4762]: I0308 01:36:06.996953 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"721da70c-0049-498f-927c-dadcd0867152","Type":"ContainerDied","Data":"b7af471a865b12dd686483a2062192e5a997f6bd613475cd6ac179e46b3ffae2"} Mar 08 01:36:06 crc kubenswrapper[4762]: I0308 01:36:06.997466 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"721da70c-0049-498f-927c-dadcd0867152","Type":"ContainerDied","Data":"240925bd14a1b5b2baa148f2e674073065cdc345e398514d38ceaaef3bcd9b5f"} Mar 08 01:36:06 crc kubenswrapper[4762]: I0308 01:36:06.997484 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"721da70c-0049-498f-927c-dadcd0867152","Type":"ContainerDied","Data":"0a1b14c854cb50902da15746f6462cf318ba092defa5ffd08d6746ef9847d470"} Mar 08 01:36:06 crc kubenswrapper[4762]: I0308 01:36:06.997495 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"721da70c-0049-498f-927c-dadcd0867152","Type":"ContainerDied","Data":"5a3b9598d8c3d9367c7ff01d198b875e2be8ed0ff2556b7f1a37464a26e12fe7"} Mar 08 01:36:06 crc kubenswrapper[4762]: I0308 01:36:06.998946 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"88cfd6da-0fda-4f8d-8611-d609aa5b3276","Type":"ContainerStarted","Data":"a23765ae1f172bfddf89797bde909e794fa7a02732cd5ef3fae71cbacb05c575"} Mar 08 01:36:06 crc kubenswrapper[4762]: I0308 01:36:06.998985 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"88cfd6da-0fda-4f8d-8611-d609aa5b3276","Type":"ContainerStarted","Data":"b76aa1c86a611ab9c36f54ec3580a0eb3757b86727a1921b39aeaf0f43235a41"} Mar 08 01:36:06 crc kubenswrapper[4762]: I0308 01:36:06.999073 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69fb85975b-kwm2b" podUID="9a158b3f-a655-4ba9-87a9-b74c44bbc54a" containerName="horizon-log" containerID="cri-o://813234363fd65833a81c9d5a15e93d8e4bb3a2e33c41d9ffb94ec219a055faf2" gracePeriod=30 Mar 08 01:36:06 crc kubenswrapper[4762]: I0308 01:36:06.999137 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-69fb85975b-kwm2b" podUID="9a158b3f-a655-4ba9-87a9-b74c44bbc54a" containerName="horizon" containerID="cri-o://4515f28acfb87831b4e06ced07dbf9dcb3ecced4ee00ce44f7989dddfbf32d70" gracePeriod=30 Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.060554 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.192700 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/721da70c-0049-498f-927c-dadcd0867152-run-httpd\") pod \"721da70c-0049-498f-927c-dadcd0867152\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.192858 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-ceilometer-tls-certs\") pod \"721da70c-0049-498f-927c-dadcd0867152\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.192927 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/721da70c-0049-498f-927c-dadcd0867152-log-httpd\") pod \"721da70c-0049-498f-927c-dadcd0867152\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.192972 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-sg-core-conf-yaml\") pod \"721da70c-0049-498f-927c-dadcd0867152\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.193004 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghsxm\" (UniqueName: \"kubernetes.io/projected/721da70c-0049-498f-927c-dadcd0867152-kube-api-access-ghsxm\") pod \"721da70c-0049-498f-927c-dadcd0867152\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.193075 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-config-data\") pod \"721da70c-0049-498f-927c-dadcd0867152\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.193080 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/721da70c-0049-498f-927c-dadcd0867152-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "721da70c-0049-498f-927c-dadcd0867152" (UID: "721da70c-0049-498f-927c-dadcd0867152"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.193118 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-combined-ca-bundle\") pod \"721da70c-0049-498f-927c-dadcd0867152\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.193144 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-scripts\") pod \"721da70c-0049-498f-927c-dadcd0867152\" (UID: \"721da70c-0049-498f-927c-dadcd0867152\") " Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.193974 4762 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/721da70c-0049-498f-927c-dadcd0867152-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.198315 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-scripts" (OuterVolumeSpecName: "scripts") pod "721da70c-0049-498f-927c-dadcd0867152" (UID: "721da70c-0049-498f-927c-dadcd0867152"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.200925 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/721da70c-0049-498f-927c-dadcd0867152-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "721da70c-0049-498f-927c-dadcd0867152" (UID: "721da70c-0049-498f-927c-dadcd0867152"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.261962 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/721da70c-0049-498f-927c-dadcd0867152-kube-api-access-ghsxm" (OuterVolumeSpecName: "kube-api-access-ghsxm") pod "721da70c-0049-498f-927c-dadcd0867152" (UID: "721da70c-0049-498f-927c-dadcd0867152"). InnerVolumeSpecName "kube-api-access-ghsxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.287945 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "721da70c-0049-498f-927c-dadcd0867152" (UID: "721da70c-0049-498f-927c-dadcd0867152"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.303577 4762 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/721da70c-0049-498f-927c-dadcd0867152-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.303608 4762 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.303619 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghsxm\" (UniqueName: \"kubernetes.io/projected/721da70c-0049-498f-927c-dadcd0867152-kube-api-access-ghsxm\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.303628 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.327997 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "721da70c-0049-498f-927c-dadcd0867152" (UID: "721da70c-0049-498f-927c-dadcd0867152"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.330132 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "721da70c-0049-498f-927c-dadcd0867152" (UID: "721da70c-0049-498f-927c-dadcd0867152"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.394614 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-config-data" (OuterVolumeSpecName: "config-data") pod "721da70c-0049-498f-927c-dadcd0867152" (UID: "721da70c-0049-498f-927c-dadcd0867152"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.410684 4762 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.410720 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:07 crc kubenswrapper[4762]: I0308 01:36:07.410729 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/721da70c-0049-498f-927c-dadcd0867152-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.013392 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"721da70c-0049-498f-927c-dadcd0867152","Type":"ContainerDied","Data":"6603ef2c6933ac34bbb2b2c96575d8b11634ab0d9441dcebe513a55e83c76d07"} Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.013468 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.013696 4762 scope.go:117] "RemoveContainer" containerID="b7af471a865b12dd686483a2062192e5a997f6bd613475cd6ac179e46b3ffae2" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.017456 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"88cfd6da-0fda-4f8d-8611-d609aa5b3276","Type":"ContainerStarted","Data":"8731cf35ef83e74c5c790482044d3f92e9a60d63307a04a5c392db77095a934c"} Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.018414 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.037264 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.037228968 podStartE2EDuration="3.037228968s" podCreationTimestamp="2026-03-08 01:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 01:36:08.034363272 +0000 UTC m=+4389.508507616" watchObservedRunningTime="2026-03-08 01:36:08.037228968 +0000 UTC m=+4389.511373302" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.067501 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.083939 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.094128 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 08 01:36:08 crc kubenswrapper[4762]: E0308 01:36:08.094615 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721da70c-0049-498f-927c-dadcd0867152" containerName="ceilometer-central-agent" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.094632 4762 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="721da70c-0049-498f-927c-dadcd0867152" containerName="ceilometer-central-agent" Mar 08 01:36:08 crc kubenswrapper[4762]: E0308 01:36:08.094642 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721da70c-0049-498f-927c-dadcd0867152" containerName="proxy-httpd" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.094650 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="721da70c-0049-498f-927c-dadcd0867152" containerName="proxy-httpd" Mar 08 01:36:08 crc kubenswrapper[4762]: E0308 01:36:08.094704 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721da70c-0049-498f-927c-dadcd0867152" containerName="ceilometer-notification-agent" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.094712 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="721da70c-0049-498f-927c-dadcd0867152" containerName="ceilometer-notification-agent" Mar 08 01:36:08 crc kubenswrapper[4762]: E0308 01:36:08.094723 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721da70c-0049-498f-927c-dadcd0867152" containerName="sg-core" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.094729 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="721da70c-0049-498f-927c-dadcd0867152" containerName="sg-core" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.094953 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="721da70c-0049-498f-927c-dadcd0867152" containerName="sg-core" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.094968 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="721da70c-0049-498f-927c-dadcd0867152" containerName="ceilometer-notification-agent" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.094983 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="721da70c-0049-498f-927c-dadcd0867152" containerName="ceilometer-central-agent" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.094992 4762 
memory_manager.go:354] "RemoveStaleState removing state" podUID="721da70c-0049-498f-927c-dadcd0867152" containerName="proxy-httpd" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.096907 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.110627 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.111417 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.111485 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.132289 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.227554 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x58zw\" (UniqueName: \"kubernetes.io/projected/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-kube-api-access-x58zw\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.227979 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-scripts\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.228128 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.228317 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-run-httpd\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.228452 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.228557 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.228700 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-log-httpd\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.228853 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-config-data\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: 
I0308 01:36:08.330830 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-log-httpd\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.330943 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-config-data\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.330986 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x58zw\" (UniqueName: \"kubernetes.io/projected/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-kube-api-access-x58zw\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.331014 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-scripts\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.331064 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.331191 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-run-httpd\") pod \"ceilometer-0\" 
(UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.331241 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.331270 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.331969 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-run-httpd\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.331995 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-log-httpd\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.338046 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-config-data\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.338554 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.338773 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.335889 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.351931 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-scripts\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.351967 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x58zw\" (UniqueName: \"kubernetes.io/projected/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-kube-api-access-x58zw\") pod \"ceilometer-0\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " pod="openstack/ceilometer-0" Mar 08 01:36:08 crc kubenswrapper[4762]: I0308 01:36:08.430338 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 01:36:09 crc kubenswrapper[4762]: I0308 01:36:09.287635 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="721da70c-0049-498f-927c-dadcd0867152" path="/var/lib/kubelet/pods/721da70c-0049-498f-927c-dadcd0867152/volumes" Mar 08 01:36:09 crc kubenswrapper[4762]: I0308 01:36:09.288853 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Mar 08 01:36:09 crc kubenswrapper[4762]: I0308 01:36:09.734988 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c8d8d886c-75966" Mar 08 01:36:09 crc kubenswrapper[4762]: I0308 01:36:09.799700 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x"] Mar 08 01:36:09 crc kubenswrapper[4762]: I0308 01:36:09.799920 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" podUID="adfd1d4d-0990-4fc3-a48c-39efca58f753" containerName="dnsmasq-dns" containerID="cri-o://425395d4785b1c07759e01f89237e39fd856a67f8311c92a854cf161b7d759c8" gracePeriod=10 Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.192451 4762 scope.go:117] "RemoveContainer" containerID="240925bd14a1b5b2baa148f2e674073065cdc345e398514d38ceaaef3bcd9b5f" Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.476596 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548896-d4xzj" Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.497619 4762 scope.go:117] "RemoveContainer" containerID="0a1b14c854cb50902da15746f6462cf318ba092defa5ffd08d6746ef9847d470" Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.502969 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.553335 4762 scope.go:117] "RemoveContainer" containerID="5a3b9598d8c3d9367c7ff01d198b875e2be8ed0ff2556b7f1a37464a26e12fe7" Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.580226 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99ph5\" (UniqueName: \"kubernetes.io/projected/b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4-kube-api-access-99ph5\") pod \"b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4\" (UID: \"b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4\") " Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.584697 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4-kube-api-access-99ph5" (OuterVolumeSpecName: "kube-api-access-99ph5") pod "b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4" (UID: "b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4"). InnerVolumeSpecName "kube-api-access-99ph5". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.683827 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-dns-svc\") pod \"adfd1d4d-0990-4fc3-a48c-39efca58f753\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") "
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.683973 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h89xf\" (UniqueName: \"kubernetes.io/projected/adfd1d4d-0990-4fc3-a48c-39efca58f753-kube-api-access-h89xf\") pod \"adfd1d4d-0990-4fc3-a48c-39efca58f753\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") "
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.684001 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-ovsdbserver-nb\") pod \"adfd1d4d-0990-4fc3-a48c-39efca58f753\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") "
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.684038 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-ovsdbserver-sb\") pod \"adfd1d4d-0990-4fc3-a48c-39efca58f753\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") "
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.684060 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-openstack-edpm-ipam\") pod \"adfd1d4d-0990-4fc3-a48c-39efca58f753\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") "
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.684204 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-config\") pod \"adfd1d4d-0990-4fc3-a48c-39efca58f753\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") "
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.684236 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-dns-swift-storage-0\") pod \"adfd1d4d-0990-4fc3-a48c-39efca58f753\" (UID: \"adfd1d4d-0990-4fc3-a48c-39efca58f753\") "
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.684711 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99ph5\" (UniqueName: \"kubernetes.io/projected/b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4-kube-api-access-99ph5\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.689626 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adfd1d4d-0990-4fc3-a48c-39efca58f753-kube-api-access-h89xf" (OuterVolumeSpecName: "kube-api-access-h89xf") pod "adfd1d4d-0990-4fc3-a48c-39efca58f753" (UID: "adfd1d4d-0990-4fc3-a48c-39efca58f753"). InnerVolumeSpecName "kube-api-access-h89xf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.758559 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "adfd1d4d-0990-4fc3-a48c-39efca58f753" (UID: "adfd1d4d-0990-4fc3-a48c-39efca58f753"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.761514 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "adfd1d4d-0990-4fc3-a48c-39efca58f753" (UID: "adfd1d4d-0990-4fc3-a48c-39efca58f753"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.767798 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.771424 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "adfd1d4d-0990-4fc3-a48c-39efca58f753" (UID: "adfd1d4d-0990-4fc3-a48c-39efca58f753"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.777739 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "adfd1d4d-0990-4fc3-a48c-39efca58f753" (UID: "adfd1d4d-0990-4fc3-a48c-39efca58f753"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.786190 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-config" (OuterVolumeSpecName: "config") pod "adfd1d4d-0990-4fc3-a48c-39efca58f753" (UID: "adfd1d4d-0990-4fc3-a48c-39efca58f753"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.786766 4762 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-config\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.786860 4762 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.786919 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h89xf\" (UniqueName: \"kubernetes.io/projected/adfd1d4d-0990-4fc3-a48c-39efca58f753-kube-api-access-h89xf\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.786972 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.787025 4762 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.787083 4762 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.797666 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "adfd1d4d-0990-4fc3-a48c-39efca58f753" (UID: "adfd1d4d-0990-4fc3-a48c-39efca58f753"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 01:36:10 crc kubenswrapper[4762]: I0308 01:36:10.889003 4762 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adfd1d4d-0990-4fc3-a48c-39efca58f753-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.054673 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecaaa6c9-acce-4c78-ae6f-bc8665386aed","Type":"ContainerStarted","Data":"049eebf3edcdaaf26afaebbb4f9ee7c797c9c9f2e68d25577848fa8acc727576"}
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.060935 4762 generic.go:334] "Generic (PLEG): container finished" podID="adfd1d4d-0990-4fc3-a48c-39efca58f753" containerID="425395d4785b1c07759e01f89237e39fd856a67f8311c92a854cf161b7d759c8" exitCode=0
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.061005 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" event={"ID":"adfd1d4d-0990-4fc3-a48c-39efca58f753","Type":"ContainerDied","Data":"425395d4785b1c07759e01f89237e39fd856a67f8311c92a854cf161b7d759c8"}
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.061017 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x"
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.061032 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x" event={"ID":"adfd1d4d-0990-4fc3-a48c-39efca58f753","Type":"ContainerDied","Data":"f0fed0afb3a339cd8b5eb305f19d8e18a380e468850babf46d49acdc59f4e359"}
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.061049 4762 scope.go:117] "RemoveContainer" containerID="425395d4785b1c07759e01f89237e39fd856a67f8311c92a854cf161b7d759c8"
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.071818 4762 generic.go:334] "Generic (PLEG): container finished" podID="9a158b3f-a655-4ba9-87a9-b74c44bbc54a" containerID="4515f28acfb87831b4e06ced07dbf9dcb3ecced4ee00ce44f7989dddfbf32d70" exitCode=0
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.071902 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69fb85975b-kwm2b" event={"ID":"9a158b3f-a655-4ba9-87a9-b74c44bbc54a","Type":"ContainerDied","Data":"4515f28acfb87831b4e06ced07dbf9dcb3ecced4ee00ce44f7989dddfbf32d70"}
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.075297 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e","Type":"ContainerStarted","Data":"4db262951f799b47db82b64e291bddbb2735bd9415ba204458e0e054a0b7749d"}
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.077895 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548896-d4xzj" event={"ID":"b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4","Type":"ContainerDied","Data":"91a4b5b4385bb5ee144c061a3a22b507c6e4cbc1d79cb481d00ab43524c3bd34"}
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.077931 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91a4b5b4385bb5ee144c061a3a22b507c6e4cbc1d79cb481d00ab43524c3bd34"
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.077942 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548896-d4xzj"
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.106915 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x"]
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.109861 4762 scope.go:117] "RemoveContainer" containerID="6778b66953e7642989df42cfc4bc7389f0856f4259dddebb397ff67a40d4c550"
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.120149 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cf7b6cbf7-qlc7x"]
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.143995 4762 scope.go:117] "RemoveContainer" containerID="425395d4785b1c07759e01f89237e39fd856a67f8311c92a854cf161b7d759c8"
Mar 08 01:36:11 crc kubenswrapper[4762]: E0308 01:36:11.144654 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"425395d4785b1c07759e01f89237e39fd856a67f8311c92a854cf161b7d759c8\": container with ID starting with 425395d4785b1c07759e01f89237e39fd856a67f8311c92a854cf161b7d759c8 not found: ID does not exist" containerID="425395d4785b1c07759e01f89237e39fd856a67f8311c92a854cf161b7d759c8"
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.144695 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"425395d4785b1c07759e01f89237e39fd856a67f8311c92a854cf161b7d759c8"} err="failed to get container status \"425395d4785b1c07759e01f89237e39fd856a67f8311c92a854cf161b7d759c8\": rpc error: code = NotFound desc = could not find container \"425395d4785b1c07759e01f89237e39fd856a67f8311c92a854cf161b7d759c8\": container with ID starting with 425395d4785b1c07759e01f89237e39fd856a67f8311c92a854cf161b7d759c8 not found: ID does not exist"
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.144720 4762 scope.go:117] "RemoveContainer" containerID="6778b66953e7642989df42cfc4bc7389f0856f4259dddebb397ff67a40d4c550"
Mar 08 01:36:11 crc kubenswrapper[4762]: E0308 01:36:11.145035 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6778b66953e7642989df42cfc4bc7389f0856f4259dddebb397ff67a40d4c550\": container with ID starting with 6778b66953e7642989df42cfc4bc7389f0856f4259dddebb397ff67a40d4c550 not found: ID does not exist" containerID="6778b66953e7642989df42cfc4bc7389f0856f4259dddebb397ff67a40d4c550"
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.145056 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6778b66953e7642989df42cfc4bc7389f0856f4259dddebb397ff67a40d4c550"} err="failed to get container status \"6778b66953e7642989df42cfc4bc7389f0856f4259dddebb397ff67a40d4c550\": rpc error: code = NotFound desc = could not find container \"6778b66953e7642989df42cfc4bc7389f0856f4259dddebb397ff67a40d4c550\": container with ID starting with 6778b66953e7642989df42cfc4bc7389f0856f4259dddebb397ff67a40d4c550 not found: ID does not exist"
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.269345 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69fb85975b-kwm2b" podUID="9a158b3f-a655-4ba9-87a9-b74c44bbc54a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.104:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.104:8443: connect: connection refused"
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.276717 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adfd1d4d-0990-4fc3-a48c-39efca58f753" path="/var/lib/kubelet/pods/adfd1d4d-0990-4fc3-a48c-39efca58f753/volumes"
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.551643 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548890-dxpl4"]
Mar 08 01:36:11 crc kubenswrapper[4762]: I0308 01:36:11.564212 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548890-dxpl4"]
Mar 08 01:36:12 crc kubenswrapper[4762]: I0308 01:36:12.090479 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecaaa6c9-acce-4c78-ae6f-bc8665386aed","Type":"ContainerStarted","Data":"db95f725ab4aa0d8d73e1f617656187745423a9bdb93b58852563ddf1e9c5efe"}
Mar 08 01:36:12 crc kubenswrapper[4762]: I0308 01:36:12.094184 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e","Type":"ContainerStarted","Data":"773f1c64a1d3cf8c1827a1fe25934beed384b0629b66166c0dd5795889c1ab63"}
Mar 08 01:36:12 crc kubenswrapper[4762]: I0308 01:36:12.125010 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=4.63511314 podStartE2EDuration="14.124983937s" podCreationTimestamp="2026-03-08 01:35:58 +0000 UTC" firstStartedPulling="2026-03-08 01:36:00.823882016 +0000 UTC m=+4382.298026360" lastFinishedPulling="2026-03-08 01:36:10.313752813 +0000 UTC m=+4391.787897157" observedRunningTime="2026-03-08 01:36:12.11839549 +0000 UTC m=+4393.592539834" watchObservedRunningTime="2026-03-08 01:36:12.124983937 +0000 UTC m=+4393.599128291"
Mar 08 01:36:12 crc kubenswrapper[4762]: W0308 01:36:12.428192 4762 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6af7df4a_f9ac_4a99_9e55_194eac2025f0.slice/crio-b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6af7df4a_f9ac_4a99_9e55_194eac2025f0.slice/crio-b7433fe7b60f89d9b89377e8cf7078f7951ae9a1bd23ecc92f6a595ca6a5d115.scope: no such file or directory
Mar 08 01:36:12 crc kubenswrapper[4762]: W0308 01:36:12.428273 4762 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8a1d047_99d0_443d_bcb9_bdd5fa3c88d4.slice/crio-conmon-f89be2540ccb46cfb2e7fea4b18c097678fda0f9209de58c129784265de723a3.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8a1d047_99d0_443d_bcb9_bdd5fa3c88d4.slice/crio-conmon-f89be2540ccb46cfb2e7fea4b18c097678fda0f9209de58c129784265de723a3.scope: no such file or directory
Mar 08 01:36:12 crc kubenswrapper[4762]: W0308 01:36:12.428300 4762 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8a1d047_99d0_443d_bcb9_bdd5fa3c88d4.slice/crio-f89be2540ccb46cfb2e7fea4b18c097678fda0f9209de58c129784265de723a3.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8a1d047_99d0_443d_bcb9_bdd5fa3c88d4.slice/crio-f89be2540ccb46cfb2e7fea4b18c097678fda0f9209de58c129784265de723a3.scope: no such file or directory
Mar 08 01:36:12 crc kubenswrapper[4762]: I0308 01:36:12.852876 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 08 01:36:12 crc kubenswrapper[4762]: I0308 01:36:12.853387 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 08 01:36:12 crc kubenswrapper[4762]: I0308 01:36:12.916048 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c87b746d7-v5s72"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.002243 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d7d9f8bb9-rwxd8"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.046522 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1675bf2f-acd6-4383-98d7-7c5fd92d9095-scripts\") pod \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") "
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.046595 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1675bf2f-acd6-4383-98d7-7c5fd92d9095-logs\") pod \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") "
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.046734 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1675bf2f-acd6-4383-98d7-7c5fd92d9095-horizon-secret-key\") pod \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") "
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.046810 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xfcl\" (UniqueName: \"kubernetes.io/projected/1675bf2f-acd6-4383-98d7-7c5fd92d9095-kube-api-access-8xfcl\") pod \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") "
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.046895 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1675bf2f-acd6-4383-98d7-7c5fd92d9095-config-data\") pod \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\" (UID: \"1675bf2f-acd6-4383-98d7-7c5fd92d9095\") "
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.047384 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1675bf2f-acd6-4383-98d7-7c5fd92d9095-logs" (OuterVolumeSpecName: "logs") pod "1675bf2f-acd6-4383-98d7-7c5fd92d9095" (UID: "1675bf2f-acd6-4383-98d7-7c5fd92d9095"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.047965 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1675bf2f-acd6-4383-98d7-7c5fd92d9095-logs\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.054896 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1675bf2f-acd6-4383-98d7-7c5fd92d9095-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1675bf2f-acd6-4383-98d7-7c5fd92d9095" (UID: "1675bf2f-acd6-4383-98d7-7c5fd92d9095"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.057930 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1675bf2f-acd6-4383-98d7-7c5fd92d9095-kube-api-access-8xfcl" (OuterVolumeSpecName: "kube-api-access-8xfcl") pod "1675bf2f-acd6-4383-98d7-7c5fd92d9095" (UID: "1675bf2f-acd6-4383-98d7-7c5fd92d9095"). InnerVolumeSpecName "kube-api-access-8xfcl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.074608 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1675bf2f-acd6-4383-98d7-7c5fd92d9095-config-data" (OuterVolumeSpecName: "config-data") pod "1675bf2f-acd6-4383-98d7-7c5fd92d9095" (UID: "1675bf2f-acd6-4383-98d7-7c5fd92d9095"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.088741 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1675bf2f-acd6-4383-98d7-7c5fd92d9095-scripts" (OuterVolumeSpecName: "scripts") pod "1675bf2f-acd6-4383-98d7-7c5fd92d9095" (UID: "1675bf2f-acd6-4383-98d7-7c5fd92d9095"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.107154 4762 generic.go:334] "Generic (PLEG): container finished" podID="1675bf2f-acd6-4383-98d7-7c5fd92d9095" containerID="4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349" exitCode=137
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.107186 4762 generic.go:334] "Generic (PLEG): container finished" podID="1675bf2f-acd6-4383-98d7-7c5fd92d9095" containerID="c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2" exitCode=137
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.107250 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c87b746d7-v5s72" event={"ID":"1675bf2f-acd6-4383-98d7-7c5fd92d9095","Type":"ContainerDied","Data":"4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349"}
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.107277 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c87b746d7-v5s72" event={"ID":"1675bf2f-acd6-4383-98d7-7c5fd92d9095","Type":"ContainerDied","Data":"c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2"}
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.107286 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7c87b746d7-v5s72" event={"ID":"1675bf2f-acd6-4383-98d7-7c5fd92d9095","Type":"ContainerDied","Data":"853e0f68bcdd6724db6277d42e7be5ae7e674221ef10b4ed1ed2eef30533a522"}
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.107300 4762 scope.go:117] "RemoveContainer" containerID="4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.107430 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7c87b746d7-v5s72"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.118815 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecaaa6c9-acce-4c78-ae6f-bc8665386aed","Type":"ContainerStarted","Data":"3017cdc895299dc9952c566e5ab0820d14ea93f72546e96d0d94eee5465f6be5"}
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.122355 4762 generic.go:334] "Generic (PLEG): container finished" podID="01c66c2c-537a-469a-958a-5edb6e67a8ab" containerID="d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f" exitCode=137
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.122403 4762 generic.go:334] "Generic (PLEG): container finished" podID="01c66c2c-537a-469a-958a-5edb6e67a8ab" containerID="573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789" exitCode=137
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.123403 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d7d9f8bb9-rwxd8" event={"ID":"01c66c2c-537a-469a-958a-5edb6e67a8ab","Type":"ContainerDied","Data":"d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f"}
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.123479 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d7d9f8bb9-rwxd8" event={"ID":"01c66c2c-537a-469a-958a-5edb6e67a8ab","Type":"ContainerDied","Data":"573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789"}
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.123491 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d7d9f8bb9-rwxd8" event={"ID":"01c66c2c-537a-469a-958a-5edb6e67a8ab","Type":"ContainerDied","Data":"b80327951cb0bfc9576a6687aeabd9332b05b40dcd42781dc80a8ac6f5df0c32"}
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.123433 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d7d9f8bb9-rwxd8"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.149086 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01c66c2c-537a-469a-958a-5edb6e67a8ab-scripts\") pod \"01c66c2c-537a-469a-958a-5edb6e67a8ab\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") "
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.149236 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/01c66c2c-537a-469a-958a-5edb6e67a8ab-horizon-secret-key\") pod \"01c66c2c-537a-469a-958a-5edb6e67a8ab\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") "
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.149303 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01c66c2c-537a-469a-958a-5edb6e67a8ab-config-data\") pod \"01c66c2c-537a-469a-958a-5edb6e67a8ab\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") "
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.149482 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2q2r\" (UniqueName: \"kubernetes.io/projected/01c66c2c-537a-469a-958a-5edb6e67a8ab-kube-api-access-d2q2r\") pod \"01c66c2c-537a-469a-958a-5edb6e67a8ab\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") "
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.149594 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01c66c2c-537a-469a-958a-5edb6e67a8ab-logs\") pod \"01c66c2c-537a-469a-958a-5edb6e67a8ab\" (UID: \"01c66c2c-537a-469a-958a-5edb6e67a8ab\") "
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.150067 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1675bf2f-acd6-4383-98d7-7c5fd92d9095-scripts\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.150082 4762 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1675bf2f-acd6-4383-98d7-7c5fd92d9095-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.150094 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xfcl\" (UniqueName: \"kubernetes.io/projected/1675bf2f-acd6-4383-98d7-7c5fd92d9095-kube-api-access-8xfcl\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.150102 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1675bf2f-acd6-4383-98d7-7c5fd92d9095-config-data\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.150433 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01c66c2c-537a-469a-958a-5edb6e67a8ab-logs" (OuterVolumeSpecName: "logs") pod "01c66c2c-537a-469a-958a-5edb6e67a8ab" (UID: "01c66c2c-537a-469a-958a-5edb6e67a8ab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.155812 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01c66c2c-537a-469a-958a-5edb6e67a8ab-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "01c66c2c-537a-469a-958a-5edb6e67a8ab" (UID: "01c66c2c-537a-469a-958a-5edb6e67a8ab"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.155871 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7c87b746d7-v5s72"]
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.167614 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01c66c2c-537a-469a-958a-5edb6e67a8ab-kube-api-access-d2q2r" (OuterVolumeSpecName: "kube-api-access-d2q2r") pod "01c66c2c-537a-469a-958a-5edb6e67a8ab" (UID: "01c66c2c-537a-469a-958a-5edb6e67a8ab"). InnerVolumeSpecName "kube-api-access-d2q2r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.171941 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7c87b746d7-v5s72"]
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.178637 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01c66c2c-537a-469a-958a-5edb6e67a8ab-config-data" (OuterVolumeSpecName: "config-data") pod "01c66c2c-537a-469a-958a-5edb6e67a8ab" (UID: "01c66c2c-537a-469a-958a-5edb6e67a8ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.198379 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01c66c2c-537a-469a-958a-5edb6e67a8ab-scripts" (OuterVolumeSpecName: "scripts") pod "01c66c2c-537a-469a-958a-5edb6e67a8ab" (UID: "01c66c2c-537a-469a-958a-5edb6e67a8ab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.252705 4762 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/01c66c2c-537a-469a-958a-5edb6e67a8ab-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.252742 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01c66c2c-537a-469a-958a-5edb6e67a8ab-config-data\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.252754 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2q2r\" (UniqueName: \"kubernetes.io/projected/01c66c2c-537a-469a-958a-5edb6e67a8ab-kube-api-access-d2q2r\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.252767 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/01c66c2c-537a-469a-958a-5edb6e67a8ab-logs\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.252778 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01c66c2c-537a-469a-958a-5edb6e67a8ab-scripts\") on node \"crc\" DevicePath \"\""
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.274854 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1675bf2f-acd6-4383-98d7-7c5fd92d9095" path="/var/lib/kubelet/pods/1675bf2f-acd6-4383-98d7-7c5fd92d9095/volumes"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.277016 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c43864c-181b-492a-84db-593b684686e9" path="/var/lib/kubelet/pods/9c43864c-181b-492a-84db-593b684686e9/volumes"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.350552 4762 scope.go:117] "RemoveContainer" containerID="c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.370153 4762 scope.go:117] "RemoveContainer" containerID="4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349"
Mar 08 01:36:13 crc kubenswrapper[4762]: E0308 01:36:13.370599 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349\": container with ID starting with 4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349 not found: ID does not exist" containerID="4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.370631 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349"} err="failed to get container status \"4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349\": rpc error: code = NotFound desc = could not find container \"4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349\": container with ID starting with 4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349 not found: ID does not exist"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.370650 4762 scope.go:117] "RemoveContainer" containerID="c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2"
Mar 08 01:36:13 crc kubenswrapper[4762]: E0308 01:36:13.371182 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2\": container with ID starting with c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2 not found: ID does not exist" containerID="c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.371202 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2"} err="failed to get container status \"c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2\": rpc error: code = NotFound desc = could not find container \"c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2\": container with ID starting with c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2 not found: ID does not exist"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.371215 4762 scope.go:117] "RemoveContainer" containerID="4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.371400 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349"} err="failed to get container status \"4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349\": rpc error: code = NotFound desc = could not find container \"4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349\": container with ID starting with 4daef0d30f49c27b40ac855aa4fa8c40b018c2149168be67020d81dc3ebb4349 not found: ID does not exist"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.371422 4762 scope.go:117] "RemoveContainer" containerID="c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.371736 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2"} err="failed to get container status \"c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2\": rpc error: code = NotFound desc = could not find container \"c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2\": container with ID starting with c2731ebc2442dec7701a479de8dcb55e4500601d131461a87fc9a56c07ee66e2 not found: ID does not exist"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.371758 4762 scope.go:117] "RemoveContainer" containerID="d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.500122 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d7d9f8bb9-rwxd8"]
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.514498 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d7d9f8bb9-rwxd8"]
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.544642 4762 scope.go:117] "RemoveContainer" containerID="573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.568622 4762 scope.go:117] "RemoveContainer" containerID="d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f"
Mar 08 01:36:13 crc kubenswrapper[4762]: E0308 01:36:13.569261 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f\": container with ID starting with d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f not found: ID does not exist" containerID="d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f"
Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.569321 4762 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f"} err="failed to get container status \"d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f\": rpc error: code = NotFound desc = could not find container \"d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f\": container with ID starting with d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f not found: ID does not exist" Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.569357 4762 scope.go:117] "RemoveContainer" containerID="573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789" Mar 08 01:36:13 crc kubenswrapper[4762]: E0308 01:36:13.570732 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789\": container with ID starting with 573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789 not found: ID does not exist" containerID="573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789" Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.570787 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789"} err="failed to get container status \"573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789\": rpc error: code = NotFound desc = could not find container \"573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789\": container with ID starting with 573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789 not found: ID does not exist" Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.570808 4762 scope.go:117] "RemoveContainer" containerID="d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f" Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.571208 4762 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f"} err="failed to get container status \"d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f\": rpc error: code = NotFound desc = could not find container \"d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f\": container with ID starting with d92bbd0f1c03cf6b64063d1ad9757a0d225653310a7e97a7f5f3a7611c387e5f not found: ID does not exist" Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.571239 4762 scope.go:117] "RemoveContainer" containerID="573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789" Mar 08 01:36:13 crc kubenswrapper[4762]: I0308 01:36:13.571978 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789"} err="failed to get container status \"573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789\": rpc error: code = NotFound desc = could not find container \"573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789\": container with ID starting with 573843c788f1d0aa04176eac15f8fd67bc3923567086f297c92939f9eb33a789 not found: ID does not exist" Mar 08 01:36:14 crc kubenswrapper[4762]: I0308 01:36:14.136722 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecaaa6c9-acce-4c78-ae6f-bc8665386aed","Type":"ContainerStarted","Data":"108404d2070a36777d1d307e9c02329b81510ac14c759595b9eeeaa8a856f91a"} Mar 08 01:36:14 crc kubenswrapper[4762]: I0308 01:36:14.476445 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 01:36:15 crc kubenswrapper[4762]: I0308 01:36:15.280361 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01c66c2c-537a-469a-958a-5edb6e67a8ab" path="/var/lib/kubelet/pods/01c66c2c-537a-469a-958a-5edb6e67a8ab/volumes" Mar 08 01:36:16 crc kubenswrapper[4762]: I0308 
01:36:16.165978 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecaaa6c9-acce-4c78-ae6f-bc8665386aed","Type":"ContainerStarted","Data":"5d54334980b72d2cb8f36ec3c90400145db6b1e05e001279285b8f611624bdc2"} Mar 08 01:36:16 crc kubenswrapper[4762]: I0308 01:36:16.166463 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 08 01:36:16 crc kubenswrapper[4762]: I0308 01:36:16.166225 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerName="ceilometer-central-agent" containerID="cri-o://db95f725ab4aa0d8d73e1f617656187745423a9bdb93b58852563ddf1e9c5efe" gracePeriod=30 Mar 08 01:36:16 crc kubenswrapper[4762]: I0308 01:36:16.166284 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerName="sg-core" containerID="cri-o://108404d2070a36777d1d307e9c02329b81510ac14c759595b9eeeaa8a856f91a" gracePeriod=30 Mar 08 01:36:16 crc kubenswrapper[4762]: I0308 01:36:16.166309 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerName="proxy-httpd" containerID="cri-o://5d54334980b72d2cb8f36ec3c90400145db6b1e05e001279285b8f611624bdc2" gracePeriod=30 Mar 08 01:36:16 crc kubenswrapper[4762]: I0308 01:36:16.166319 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerName="ceilometer-notification-agent" containerID="cri-o://3017cdc895299dc9952c566e5ab0820d14ea93f72546e96d0d94eee5465f6be5" gracePeriod=30 Mar 08 01:36:16 crc kubenswrapper[4762]: I0308 01:36:16.187519 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=3.577324524 podStartE2EDuration="8.187501202s" podCreationTimestamp="2026-03-08 01:36:08 +0000 UTC" firstStartedPulling="2026-03-08 01:36:10.772172939 +0000 UTC m=+4392.246317283" lastFinishedPulling="2026-03-08 01:36:15.382349576 +0000 UTC m=+4396.856493961" observedRunningTime="2026-03-08 01:36:16.185535943 +0000 UTC m=+4397.659680307" watchObservedRunningTime="2026-03-08 01:36:16.187501202 +0000 UTC m=+4397.661645546" Mar 08 01:36:17 crc kubenswrapper[4762]: I0308 01:36:17.183416 4762 generic.go:334] "Generic (PLEG): container finished" podID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerID="5d54334980b72d2cb8f36ec3c90400145db6b1e05e001279285b8f611624bdc2" exitCode=0 Mar 08 01:36:17 crc kubenswrapper[4762]: I0308 01:36:17.185145 4762 generic.go:334] "Generic (PLEG): container finished" podID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerID="108404d2070a36777d1d307e9c02329b81510ac14c759595b9eeeaa8a856f91a" exitCode=2 Mar 08 01:36:17 crc kubenswrapper[4762]: I0308 01:36:17.185292 4762 generic.go:334] "Generic (PLEG): container finished" podID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerID="3017cdc895299dc9952c566e5ab0820d14ea93f72546e96d0d94eee5465f6be5" exitCode=0 Mar 08 01:36:17 crc kubenswrapper[4762]: I0308 01:36:17.183516 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecaaa6c9-acce-4c78-ae6f-bc8665386aed","Type":"ContainerDied","Data":"5d54334980b72d2cb8f36ec3c90400145db6b1e05e001279285b8f611624bdc2"} Mar 08 01:36:17 crc kubenswrapper[4762]: I0308 01:36:17.185598 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecaaa6c9-acce-4c78-ae6f-bc8665386aed","Type":"ContainerDied","Data":"108404d2070a36777d1d307e9c02329b81510ac14c759595b9eeeaa8a856f91a"} Mar 08 01:36:17 crc kubenswrapper[4762]: I0308 01:36:17.185770 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ecaaa6c9-acce-4c78-ae6f-bc8665386aed","Type":"ContainerDied","Data":"3017cdc895299dc9952c566e5ab0820d14ea93f72546e96d0d94eee5465f6be5"} Mar 08 01:36:19 crc kubenswrapper[4762]: I0308 01:36:19.352298 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.234082 4762 generic.go:334] "Generic (PLEG): container finished" podID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerID="db95f725ab4aa0d8d73e1f617656187745423a9bdb93b58852563ddf1e9c5efe" exitCode=0 Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.234327 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecaaa6c9-acce-4c78-ae6f-bc8665386aed","Type":"ContainerDied","Data":"db95f725ab4aa0d8d73e1f617656187745423a9bdb93b58852563ddf1e9c5efe"} Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.373364 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.525945 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x58zw\" (UniqueName: \"kubernetes.io/projected/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-kube-api-access-x58zw\") pod \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.526168 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-run-httpd\") pod \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.526220 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-config-data\") pod \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.526263 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-scripts\") pod \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.526294 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-ceilometer-tls-certs\") pod \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.526328 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-sg-core-conf-yaml\") pod \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.526356 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-log-httpd\") pod \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.526387 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-combined-ca-bundle\") pod \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\" (UID: \"ecaaa6c9-acce-4c78-ae6f-bc8665386aed\") " Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.526924 4762 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ecaaa6c9-acce-4c78-ae6f-bc8665386aed" (UID: "ecaaa6c9-acce-4c78-ae6f-bc8665386aed"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.527223 4762 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.527257 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ecaaa6c9-acce-4c78-ae6f-bc8665386aed" (UID: "ecaaa6c9-acce-4c78-ae6f-bc8665386aed"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.534561 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-scripts" (OuterVolumeSpecName: "scripts") pod "ecaaa6c9-acce-4c78-ae6f-bc8665386aed" (UID: "ecaaa6c9-acce-4c78-ae6f-bc8665386aed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.536556 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-kube-api-access-x58zw" (OuterVolumeSpecName: "kube-api-access-x58zw") pod "ecaaa6c9-acce-4c78-ae6f-bc8665386aed" (UID: "ecaaa6c9-acce-4c78-ae6f-bc8665386aed"). InnerVolumeSpecName "kube-api-access-x58zw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.564939 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ecaaa6c9-acce-4c78-ae6f-bc8665386aed" (UID: "ecaaa6c9-acce-4c78-ae6f-bc8665386aed"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.627066 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "ecaaa6c9-acce-4c78-ae6f-bc8665386aed" (UID: "ecaaa6c9-acce-4c78-ae6f-bc8665386aed"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.630250 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x58zw\" (UniqueName: \"kubernetes.io/projected/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-kube-api-access-x58zw\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.630301 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.630322 4762 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.630347 4762 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.630369 4762 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.655535 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ecaaa6c9-acce-4c78-ae6f-bc8665386aed" (UID: "ecaaa6c9-acce-4c78-ae6f-bc8665386aed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.697634 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-config-data" (OuterVolumeSpecName: "config-data") pod "ecaaa6c9-acce-4c78-ae6f-bc8665386aed" (UID: "ecaaa6c9-acce-4c78-ae6f-bc8665386aed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.733382 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:20 crc kubenswrapper[4762]: I0308 01:36:20.733422 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ecaaa6c9-acce-4c78-ae6f-bc8665386aed-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.010913 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.101190 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.250451 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.250514 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ecaaa6c9-acce-4c78-ae6f-bc8665386aed","Type":"ContainerDied","Data":"049eebf3edcdaaf26afaebbb4f9ee7c797c9c9f2e68d25577848fa8acc727576"} Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.250553 4762 scope.go:117] "RemoveContainer" containerID="5d54334980b72d2cb8f36ec3c90400145db6b1e05e001279285b8f611624bdc2" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.250621 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="a20b89a6-90ce-49c8-87ca-96579ab0f0ae" containerName="manila-scheduler" containerID="cri-o://4a31932eb8d8892efe1773ba3cd19f998ee74167729d1c3f4d5170e5391c3bee" gracePeriod=30 Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.250951 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="a20b89a6-90ce-49c8-87ca-96579ab0f0ae" containerName="probe" containerID="cri-o://e4d0cd4e7ccd417d0679db2d011de41ae07e3188e9a8275100d3781164eb805a" gracePeriod=30 Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.267830 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69fb85975b-kwm2b" podUID="9a158b3f-a655-4ba9-87a9-b74c44bbc54a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.104:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.104:8443: connect: connection refused" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.290373 4762 scope.go:117] "RemoveContainer" containerID="108404d2070a36777d1d307e9c02329b81510ac14c759595b9eeeaa8a856f91a" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.309005 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 
01:36:21.322545 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.334123 4762 scope.go:117] "RemoveContainer" containerID="3017cdc895299dc9952c566e5ab0820d14ea93f72546e96d0d94eee5465f6be5" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.343702 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 08 01:36:21 crc kubenswrapper[4762]: E0308 01:36:21.344322 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c66c2c-537a-469a-958a-5edb6e67a8ab" containerName="horizon-log" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.344347 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c66c2c-537a-469a-958a-5edb6e67a8ab" containerName="horizon-log" Mar 08 01:36:21 crc kubenswrapper[4762]: E0308 01:36:21.344371 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerName="sg-core" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.344380 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerName="sg-core" Mar 08 01:36:21 crc kubenswrapper[4762]: E0308 01:36:21.344401 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfd1d4d-0990-4fc3-a48c-39efca58f753" containerName="dnsmasq-dns" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.344410 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfd1d4d-0990-4fc3-a48c-39efca58f753" containerName="dnsmasq-dns" Mar 08 01:36:21 crc kubenswrapper[4762]: E0308 01:36:21.344420 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1675bf2f-acd6-4383-98d7-7c5fd92d9095" containerName="horizon-log" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.344427 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1675bf2f-acd6-4383-98d7-7c5fd92d9095" containerName="horizon-log" Mar 08 01:36:21 crc 
kubenswrapper[4762]: E0308 01:36:21.344441 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c66c2c-537a-469a-958a-5edb6e67a8ab" containerName="horizon" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.344448 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c66c2c-537a-469a-958a-5edb6e67a8ab" containerName="horizon" Mar 08 01:36:21 crc kubenswrapper[4762]: E0308 01:36:21.344460 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1675bf2f-acd6-4383-98d7-7c5fd92d9095" containerName="horizon" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.344468 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1675bf2f-acd6-4383-98d7-7c5fd92d9095" containerName="horizon" Mar 08 01:36:21 crc kubenswrapper[4762]: E0308 01:36:21.344495 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adfd1d4d-0990-4fc3-a48c-39efca58f753" containerName="init" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.344503 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="adfd1d4d-0990-4fc3-a48c-39efca58f753" containerName="init" Mar 08 01:36:21 crc kubenswrapper[4762]: E0308 01:36:21.344512 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerName="proxy-httpd" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.344519 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerName="proxy-httpd" Mar 08 01:36:21 crc kubenswrapper[4762]: E0308 01:36:21.344534 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerName="ceilometer-notification-agent" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.345510 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerName="ceilometer-notification-agent" Mar 08 01:36:21 crc kubenswrapper[4762]: E0308 01:36:21.345537 4762 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerName="ceilometer-central-agent" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.345547 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerName="ceilometer-central-agent" Mar 08 01:36:21 crc kubenswrapper[4762]: E0308 01:36:21.345578 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4" containerName="oc" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.345590 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4" containerName="oc" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.345876 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerName="ceilometer-central-agent" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.345900 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c66c2c-537a-469a-958a-5edb6e67a8ab" containerName="horizon-log" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.345916 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4" containerName="oc" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.345931 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="01c66c2c-537a-469a-958a-5edb6e67a8ab" containerName="horizon" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.345950 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="adfd1d4d-0990-4fc3-a48c-39efca58f753" containerName="dnsmasq-dns" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.345966 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerName="ceilometer-notification-agent" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.345979 4762 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerName="proxy-httpd" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.345988 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="1675bf2f-acd6-4383-98d7-7c5fd92d9095" containerName="horizon" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.346005 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" containerName="sg-core" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.346016 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="1675bf2f-acd6-4383-98d7-7c5fd92d9095" containerName="horizon-log" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.348737 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.354660 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.355375 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.355474 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.361329 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.398909 4762 scope.go:117] "RemoveContainer" containerID="db95f725ab4aa0d8d73e1f617656187745423a9bdb93b58852563ddf1e9c5efe" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.452326 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-run-httpd\") pod 
\"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.452416 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl9cs\" (UniqueName: \"kubernetes.io/projected/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-kube-api-access-nl9cs\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.453103 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.453274 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-scripts\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.453305 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-log-httpd\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.453345 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc 
kubenswrapper[4762]: I0308 01:36:21.453524 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-config-data\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.453635 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.555869 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-scripts\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.555932 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-log-httpd\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.555978 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.556020 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-config-data\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.556049 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.556083 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-run-httpd\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.556114 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl9cs\" (UniqueName: \"kubernetes.io/projected/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-kube-api-access-nl9cs\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.556244 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.556590 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-run-httpd\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc 
kubenswrapper[4762]: I0308 01:36:21.558235 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-log-httpd\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.562639 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.563118 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.565412 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-scripts\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.568533 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-config-data\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.573041 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.587602 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl9cs\" (UniqueName: \"kubernetes.io/projected/3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14-kube-api-access-nl9cs\") pod \"ceilometer-0\" (UID: \"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14\") " pod="openstack/ceilometer-0" Mar 08 01:36:21 crc kubenswrapper[4762]: I0308 01:36:21.671656 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.158477 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.283110 4762 generic.go:334] "Generic (PLEG): container finished" podID="a20b89a6-90ce-49c8-87ca-96579ab0f0ae" containerID="e4d0cd4e7ccd417d0679db2d011de41ae07e3188e9a8275100d3781164eb805a" exitCode=0 Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.283145 4762 generic.go:334] "Generic (PLEG): container finished" podID="a20b89a6-90ce-49c8-87ca-96579ab0f0ae" containerID="4a31932eb8d8892efe1773ba3cd19f998ee74167729d1c3f4d5170e5391c3bee" exitCode=0 Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.283216 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a20b89a6-90ce-49c8-87ca-96579ab0f0ae","Type":"ContainerDied","Data":"e4d0cd4e7ccd417d0679db2d011de41ae07e3188e9a8275100d3781164eb805a"} Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.283242 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a20b89a6-90ce-49c8-87ca-96579ab0f0ae","Type":"ContainerDied","Data":"4a31932eb8d8892efe1773ba3cd19f998ee74167729d1c3f4d5170e5391c3bee"} Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.285388 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14","Type":"ContainerStarted","Data":"96b946d0e86140129c5da79f292b0b018aa06dea6d7fc6dc36feaffc471dece1"} Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.521084 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.576026 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-etc-machine-id\") pod \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.576283 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-config-data-custom\") pod \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.576391 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-combined-ca-bundle\") pod \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.576498 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-config-data\") pod \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.576564 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr9rx\" (UniqueName: 
\"kubernetes.io/projected/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-kube-api-access-hr9rx\") pod \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.576641 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-scripts\") pod \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\" (UID: \"a20b89a6-90ce-49c8-87ca-96579ab0f0ae\") " Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.579905 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a20b89a6-90ce-49c8-87ca-96579ab0f0ae" (UID: "a20b89a6-90ce-49c8-87ca-96579ab0f0ae"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.588956 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-kube-api-access-hr9rx" (OuterVolumeSpecName: "kube-api-access-hr9rx") pod "a20b89a6-90ce-49c8-87ca-96579ab0f0ae" (UID: "a20b89a6-90ce-49c8-87ca-96579ab0f0ae"). InnerVolumeSpecName "kube-api-access-hr9rx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.590815 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-scripts" (OuterVolumeSpecName: "scripts") pod "a20b89a6-90ce-49c8-87ca-96579ab0f0ae" (UID: "a20b89a6-90ce-49c8-87ca-96579ab0f0ae"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.591788 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a20b89a6-90ce-49c8-87ca-96579ab0f0ae" (UID: "a20b89a6-90ce-49c8-87ca-96579ab0f0ae"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.680512 4762 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.680546 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.680559 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr9rx\" (UniqueName: \"kubernetes.io/projected/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-kube-api-access-hr9rx\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.680570 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.720103 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a20b89a6-90ce-49c8-87ca-96579ab0f0ae" (UID: "a20b89a6-90ce-49c8-87ca-96579ab0f0ae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.766305 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-config-data" (OuterVolumeSpecName: "config-data") pod "a20b89a6-90ce-49c8-87ca-96579ab0f0ae" (UID: "a20b89a6-90ce-49c8-87ca-96579ab0f0ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.782411 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:22 crc kubenswrapper[4762]: I0308 01:36:22.782653 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a20b89a6-90ce-49c8-87ca-96579ab0f0ae-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.277705 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecaaa6c9-acce-4c78-ae6f-bc8665386aed" path="/var/lib/kubelet/pods/ecaaa6c9-acce-4c78-ae6f-bc8665386aed/volumes" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.299936 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a20b89a6-90ce-49c8-87ca-96579ab0f0ae","Type":"ContainerDied","Data":"652091f458d3fe25e9a8ccbd055db4bce322854b9591539a3ec315aad66ec054"} Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.300245 4762 scope.go:117] "RemoveContainer" containerID="e4d0cd4e7ccd417d0679db2d011de41ae07e3188e9a8275100d3781164eb805a" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.300043 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.303956 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14","Type":"ContainerStarted","Data":"5a6d8fac6a6e64a0518c0702f5a28f4dae3953a5d0f5c4c20afa6d40f8071d06"} Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.365066 4762 scope.go:117] "RemoveContainer" containerID="4a31932eb8d8892efe1773ba3cd19f998ee74167729d1c3f4d5170e5391c3bee" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.366493 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.390913 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.399326 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Mar 08 01:36:23 crc kubenswrapper[4762]: E0308 01:36:23.399841 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20b89a6-90ce-49c8-87ca-96579ab0f0ae" containerName="manila-scheduler" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.399858 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20b89a6-90ce-49c8-87ca-96579ab0f0ae" containerName="manila-scheduler" Mar 08 01:36:23 crc kubenswrapper[4762]: E0308 01:36:23.399907 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a20b89a6-90ce-49c8-87ca-96579ab0f0ae" containerName="probe" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.399915 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="a20b89a6-90ce-49c8-87ca-96579ab0f0ae" containerName="probe" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.400117 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="a20b89a6-90ce-49c8-87ca-96579ab0f0ae" containerName="probe" Mar 08 01:36:23 crc 
kubenswrapper[4762]: I0308 01:36:23.400138 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="a20b89a6-90ce-49c8-87ca-96579ab0f0ae" containerName="manila-scheduler" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.405322 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.407997 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.435251 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.498717 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65897654-e519-4a6a-9557-2344198bc5cd-config-data\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.498829 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65897654-e519-4a6a-9557-2344198bc5cd-scripts\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.498855 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65897654-e519-4a6a-9557-2344198bc5cd-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.498880 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65897654-e519-4a6a-9557-2344198bc5cd-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.498938 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k94qm\" (UniqueName: \"kubernetes.io/projected/65897654-e519-4a6a-9557-2344198bc5cd-kube-api-access-k94qm\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.498990 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65897654-e519-4a6a-9557-2344198bc5cd-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.601284 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65897654-e519-4a6a-9557-2344198bc5cd-config-data\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.602740 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65897654-e519-4a6a-9557-2344198bc5cd-scripts\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.603343 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65897654-e519-4a6a-9557-2344198bc5cd-config-data-custom\") pod 
\"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.603474 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65897654-e519-4a6a-9557-2344198bc5cd-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.603895 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k94qm\" (UniqueName: \"kubernetes.io/projected/65897654-e519-4a6a-9557-2344198bc5cd-kube-api-access-k94qm\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.604099 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65897654-e519-4a6a-9557-2344198bc5cd-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.604255 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65897654-e519-4a6a-9557-2344198bc5cd-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.606065 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65897654-e519-4a6a-9557-2344198bc5cd-scripts\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 
01:36:23.612112 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65897654-e519-4a6a-9557-2344198bc5cd-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.612507 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65897654-e519-4a6a-9557-2344198bc5cd-config-data\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.617652 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65897654-e519-4a6a-9557-2344198bc5cd-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.624336 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k94qm\" (UniqueName: \"kubernetes.io/projected/65897654-e519-4a6a-9557-2344198bc5cd-kube-api-access-k94qm\") pod \"manila-scheduler-0\" (UID: \"65897654-e519-4a6a-9557-2344198bc5cd\") " pod="openstack/manila-scheduler-0" Mar 08 01:36:23 crc kubenswrapper[4762]: I0308 01:36:23.727971 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Mar 08 01:36:24 crc kubenswrapper[4762]: I0308 01:36:24.207470 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Mar 08 01:36:24 crc kubenswrapper[4762]: I0308 01:36:24.331137 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14","Type":"ContainerStarted","Data":"f5d64f427f121d35c8e4885959d0a2c99f52a1aead54e339badc41329af0d8d5"} Mar 08 01:36:24 crc kubenswrapper[4762]: I0308 01:36:24.331200 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14","Type":"ContainerStarted","Data":"00468f1ae42b52b8d302ac83bafdef16b4222cc7c608bb2ae817badf96a1aab5"} Mar 08 01:36:24 crc kubenswrapper[4762]: I0308 01:36:24.334609 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"65897654-e519-4a6a-9557-2344198bc5cd","Type":"ContainerStarted","Data":"967b00393fa70d63fdeaa807fab7086246f5f4f108e2f344f9310ada53b3eb89"} Mar 08 01:36:25 crc kubenswrapper[4762]: I0308 01:36:25.296186 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a20b89a6-90ce-49c8-87ca-96579ab0f0ae" path="/var/lib/kubelet/pods/a20b89a6-90ce-49c8-87ca-96579ab0f0ae/volumes" Mar 08 01:36:25 crc kubenswrapper[4762]: I0308 01:36:25.352353 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"65897654-e519-4a6a-9557-2344198bc5cd","Type":"ContainerStarted","Data":"50e50e6a703878ca69b5ff3ce24c1f75f5cc2172bff39089fd9ecb47e76fb9d9"} Mar 08 01:36:25 crc kubenswrapper[4762]: I0308 01:36:25.352416 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"65897654-e519-4a6a-9557-2344198bc5cd","Type":"ContainerStarted","Data":"1aeb1800d448d5480416b45b618a5e0dfd29767df74dc7e20f9b1a45bc95cae4"} Mar 08 01:36:25 crc 
kubenswrapper[4762]: I0308 01:36:25.383510 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.383492599 podStartE2EDuration="2.383492599s" podCreationTimestamp="2026-03-08 01:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 01:36:25.377740027 +0000 UTC m=+4406.851884391" watchObservedRunningTime="2026-03-08 01:36:25.383492599 +0000 UTC m=+4406.857636933" Mar 08 01:36:26 crc kubenswrapper[4762]: I0308 01:36:26.369222 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14","Type":"ContainerStarted","Data":"091bb480f6e6dc946b98c7ef2aab3e6db6f8093a3eb31869f9a769ccb655519f"} Mar 08 01:36:26 crc kubenswrapper[4762]: I0308 01:36:26.369646 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 08 01:36:26 crc kubenswrapper[4762]: I0308 01:36:26.400856 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9519151209999999 podStartE2EDuration="5.400829653s" podCreationTimestamp="2026-03-08 01:36:21 +0000 UTC" firstStartedPulling="2026-03-08 01:36:22.171624469 +0000 UTC m=+4403.645768823" lastFinishedPulling="2026-03-08 01:36:25.620538971 +0000 UTC m=+4407.094683355" observedRunningTime="2026-03-08 01:36:26.39368019 +0000 UTC m=+4407.867824554" watchObservedRunningTime="2026-03-08 01:36:26.400829653 +0000 UTC m=+4407.874974027" Mar 08 01:36:27 crc kubenswrapper[4762]: I0308 01:36:27.530261 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Mar 08 01:36:30 crc kubenswrapper[4762]: I0308 01:36:30.780137 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Mar 08 01:36:30 crc kubenswrapper[4762]: 
I0308 01:36:30.873475 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Mar 08 01:36:31 crc kubenswrapper[4762]: I0308 01:36:31.268415 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-69fb85975b-kwm2b" podUID="9a158b3f-a655-4ba9-87a9-b74c44bbc54a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.104:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.104:8443: connect: connection refused" Mar 08 01:36:31 crc kubenswrapper[4762]: I0308 01:36:31.282722 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:36:31 crc kubenswrapper[4762]: I0308 01:36:31.502684 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" containerName="manila-share" containerID="cri-o://4db262951f799b47db82b64e291bddbb2735bd9415ba204458e0e054a0b7749d" gracePeriod=30 Mar 08 01:36:31 crc kubenswrapper[4762]: I0308 01:36:31.502842 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" containerName="probe" containerID="cri-o://773f1c64a1d3cf8c1827a1fe25934beed384b0629b66166c0dd5795889c1ab63" gracePeriod=30 Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.518036 4762 generic.go:334] "Generic (PLEG): container finished" podID="c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" containerID="773f1c64a1d3cf8c1827a1fe25934beed384b0629b66166c0dd5795889c1ab63" exitCode=0 Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.518439 4762 generic.go:334] "Generic (PLEG): container finished" podID="c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" containerID="4db262951f799b47db82b64e291bddbb2735bd9415ba204458e0e054a0b7749d" exitCode=1 Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.518138 4762 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e","Type":"ContainerDied","Data":"773f1c64a1d3cf8c1827a1fe25934beed384b0629b66166c0dd5795889c1ab63"} Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.518478 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e","Type":"ContainerDied","Data":"4db262951f799b47db82b64e291bddbb2735bd9415ba204458e0e054a0b7749d"} Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.518493 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e","Type":"ContainerDied","Data":"c04aff8312b052dd4775002b0a6370487e3ccc29ff8cdcea77dccb755006e1c5"} Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.518504 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c04aff8312b052dd4775002b0a6370487e3ccc29ff8cdcea77dccb755006e1c5" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.608539 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.762097 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-combined-ca-bundle\") pod \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.762179 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-scripts\") pod \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.762338 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-config-data\") pod \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.762372 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-config-data-custom\") pod \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.762470 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-var-lib-manila\") pod \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.762522 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-etc-machine-id\") pod \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.762551 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-ceph\") pod \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.762578 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j885l\" (UniqueName: \"kubernetes.io/projected/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-kube-api-access-j885l\") pod \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\" (UID: \"c40e35e4-766f-4ed8-9ae1-89e88d1fae9e\") " Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.762862 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" (UID: "c40e35e4-766f-4ed8-9ae1-89e88d1fae9e"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.762906 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" (UID: "c40e35e4-766f-4ed8-9ae1-89e88d1fae9e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.763626 4762 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-var-lib-manila\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.763645 4762 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.768134 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-scripts" (OuterVolumeSpecName: "scripts") pod "c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" (UID: "c40e35e4-766f-4ed8-9ae1-89e88d1fae9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.768645 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" (UID: "c40e35e4-766f-4ed8-9ae1-89e88d1fae9e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.770547 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-ceph" (OuterVolumeSpecName: "ceph") pod "c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" (UID: "c40e35e4-766f-4ed8-9ae1-89e88d1fae9e"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.773099 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-kube-api-access-j885l" (OuterVolumeSpecName: "kube-api-access-j885l") pod "c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" (UID: "c40e35e4-766f-4ed8-9ae1-89e88d1fae9e"). InnerVolumeSpecName "kube-api-access-j885l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.839229 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" (UID: "c40e35e4-766f-4ed8-9ae1-89e88d1fae9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.866066 4762 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-ceph\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.866099 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j885l\" (UniqueName: \"kubernetes.io/projected/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-kube-api-access-j885l\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.866113 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.866120 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-scripts\") on node 
\"crc\" DevicePath \"\"" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.866131 4762 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.881561 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-config-data" (OuterVolumeSpecName: "config-data") pod "c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" (UID: "c40e35e4-766f-4ed8-9ae1-89e88d1fae9e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:32 crc kubenswrapper[4762]: I0308 01:36:32.967913 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.189802 4762 scope.go:117] "RemoveContainer" containerID="da1391907ccec4d1aa90ef666aca5c27e3ff8a63ce28182d7d9c4d5dbeaf350d" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.529805 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.568086 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.595494 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.613498 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Mar 08 01:36:33 crc kubenswrapper[4762]: E0308 01:36:33.614285 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" containerName="manila-share" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.614315 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" containerName="manila-share" Mar 08 01:36:33 crc kubenswrapper[4762]: E0308 01:36:33.614367 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" containerName="probe" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.614380 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" containerName="probe" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.614838 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" containerName="probe" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.614900 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" containerName="manila-share" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.617032 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.627309 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.628264 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.689278 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.689511 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-ceph\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.689796 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.689867 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-scripts\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.689982 
4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-config-data\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.690032 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.690070 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7j2n\" (UniqueName: \"kubernetes.io/projected/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-kube-api-access-z7j2n\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.690199 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.728237 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.794415 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-config-data-custom\") pod \"manila-share-share1-0\" (UID: 
\"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.794481 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7j2n\" (UniqueName: \"kubernetes.io/projected/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-kube-api-access-z7j2n\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.794542 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.794605 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.794712 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-ceph\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.794859 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: 
I0308 01:36:33.794894 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-scripts\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.794963 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-config-data\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.796434 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.796636 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.801501 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.802774 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-scripts\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.804029 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-ceph\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.809618 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-config-data\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.810573 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.819929 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7j2n\" (UniqueName: \"kubernetes.io/projected/2a5c5599-66a7-46b2-8f10-2bfe3905d5fd-kube-api-access-z7j2n\") pod \"manila-share-share1-0\" (UID: \"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd\") " pod="openstack/manila-share-share1-0" Mar 08 01:36:33 crc kubenswrapper[4762]: I0308 01:36:33.945203 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Mar 08 01:36:34 crc kubenswrapper[4762]: I0308 01:36:34.569813 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Mar 08 01:36:35 crc kubenswrapper[4762]: I0308 01:36:35.291278 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c40e35e4-766f-4ed8-9ae1-89e88d1fae9e" path="/var/lib/kubelet/pods/c40e35e4-766f-4ed8-9ae1-89e88d1fae9e/volumes" Mar 08 01:36:35 crc kubenswrapper[4762]: I0308 01:36:35.581311 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd","Type":"ContainerStarted","Data":"4cc47b87fb37c59fc52c90dc77591a2a6cb35632fa2dcf9f76135fc409b3c5e3"} Mar 08 01:36:35 crc kubenswrapper[4762]: I0308 01:36:35.581594 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd","Type":"ContainerStarted","Data":"1b8a2068a1c453eaae4510fdabbec72a32cde47ca061a8495ec815be16231932"} Mar 08 01:36:36 crc kubenswrapper[4762]: I0308 01:36:36.609805 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd","Type":"ContainerStarted","Data":"967449ce16cc53ae0d97019677963cb45358d596e6f39ac5172ec8913bcd3b86"} Mar 08 01:36:36 crc kubenswrapper[4762]: I0308 01:36:36.635950 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.635933235 podStartE2EDuration="3.635933235s" podCreationTimestamp="2026-03-08 01:36:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 01:36:36.633057499 +0000 UTC m=+4418.107201853" watchObservedRunningTime="2026-03-08 01:36:36.635933235 +0000 UTC m=+4418.110077579" Mar 08 01:36:37 crc kubenswrapper[4762]: 
W0308 01:36:37.030349 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6af7df4a_f9ac_4a99_9e55_194eac2025f0.slice/crio-c7bc926a5180c8203cac35237b0bc16c9140bc70cab986435732338301ec8753 WatchSource:0}: Error finding container c7bc926a5180c8203cac35237b0bc16c9140bc70cab986435732338301ec8753: Status 404 returned error can't find the container with id c7bc926a5180c8203cac35237b0bc16c9140bc70cab986435732338301ec8753 Mar 08 01:36:37 crc kubenswrapper[4762]: E0308 01:36:37.031261 4762 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01c66c2c_537a_469a_958a_5edb6e67a8ab.slice/crio-b80327951cb0bfc9576a6687aeabd9332b05b40dcd42781dc80a8ac6f5df0c32: Error finding container b80327951cb0bfc9576a6687aeabd9332b05b40dcd42781dc80a8ac6f5df0c32: Status 404 returned error can't find the container with id b80327951cb0bfc9576a6687aeabd9332b05b40dcd42781dc80a8ac6f5df0c32 Mar 08 01:36:37 crc kubenswrapper[4762]: W0308 01:36:37.033945 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6af7df4a_f9ac_4a99_9e55_194eac2025f0.slice/crio-151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e.scope WatchSource:0}: Error finding container 151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e: Status 404 returned error can't find the container with id 151018a108ea0ea752032b3450ad0669a33d26455371a46db950cb376e05671e Mar 08 01:36:37 crc kubenswrapper[4762]: W0308 01:36:37.034722 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda20b89a6_90ce_49c8_87ca_96579ab0f0ae.slice/crio-4a31932eb8d8892efe1773ba3cd19f998ee74167729d1c3f4d5170e5391c3bee.scope WatchSource:0}: Error finding container 4a31932eb8d8892efe1773ba3cd19f998ee74167729d1c3f4d5170e5391c3bee: Status 404 
returned error can't find the container with id 4a31932eb8d8892efe1773ba3cd19f998ee74167729d1c3f4d5170e5391c3bee Mar 08 01:36:37 crc kubenswrapper[4762]: W0308 01:36:37.036978 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda20b89a6_90ce_49c8_87ca_96579ab0f0ae.slice/crio-e4d0cd4e7ccd417d0679db2d011de41ae07e3188e9a8275100d3781164eb805a.scope WatchSource:0}: Error finding container e4d0cd4e7ccd417d0679db2d011de41ae07e3188e9a8275100d3781164eb805a: Status 404 returned error can't find the container with id e4d0cd4e7ccd417d0679db2d011de41ae07e3188e9a8275100d3781164eb805a Mar 08 01:36:37 crc kubenswrapper[4762]: W0308 01:36:37.037099 4762 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecaaa6c9_acce_4c78_ae6f_bc8665386aed.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecaaa6c9_acce_4c78_ae6f_bc8665386aed.slice: no such file or directory Mar 08 01:36:37 crc kubenswrapper[4762]: W0308 01:36:37.043330 4762 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc40e35e4_766f_4ed8_9ae1_89e88d1fae9e.slice/crio-conmon-4db262951f799b47db82b64e291bddbb2735bd9415ba204458e0e054a0b7749d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc40e35e4_766f_4ed8_9ae1_89e88d1fae9e.slice/crio-conmon-4db262951f799b47db82b64e291bddbb2735bd9415ba204458e0e054a0b7749d.scope: no such file or directory Mar 08 01:36:37 crc kubenswrapper[4762]: W0308 01:36:37.043391 4762 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc40e35e4_766f_4ed8_9ae1_89e88d1fae9e.slice/crio-4db262951f799b47db82b64e291bddbb2735bd9415ba204458e0e054a0b7749d.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc40e35e4_766f_4ed8_9ae1_89e88d1fae9e.slice/crio-4db262951f799b47db82b64e291bddbb2735bd9415ba204458e0e054a0b7749d.scope: no such file or directory Mar 08 01:36:37 crc kubenswrapper[4762]: W0308 01:36:37.044156 4762 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc40e35e4_766f_4ed8_9ae1_89e88d1fae9e.slice/crio-conmon-773f1c64a1d3cf8c1827a1fe25934beed384b0629b66166c0dd5795889c1ab63.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc40e35e4_766f_4ed8_9ae1_89e88d1fae9e.slice/crio-conmon-773f1c64a1d3cf8c1827a1fe25934beed384b0629b66166c0dd5795889c1ab63.scope: no such file or directory Mar 08 01:36:37 crc kubenswrapper[4762]: W0308 01:36:37.044343 4762 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc40e35e4_766f_4ed8_9ae1_89e88d1fae9e.slice/crio-773f1c64a1d3cf8c1827a1fe25934beed384b0629b66166c0dd5795889c1ab63.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc40e35e4_766f_4ed8_9ae1_89e88d1fae9e.slice/crio-773f1c64a1d3cf8c1827a1fe25934beed384b0629b66166c0dd5795889c1ab63.scope: no such file or directory Mar 08 01:36:37 crc kubenswrapper[4762]: E0308 01:36:37.363554 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a158b3f_a655_4ba9_87a9_b74c44bbc54a.slice/crio-conmon-813234363fd65833a81c9d5a15e93d8e4bb3a2e33c41d9ffb94ec219a055faf2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a158b3f_a655_4ba9_87a9_b74c44bbc54a.slice/crio-813234363fd65833a81c9d5a15e93d8e4bb3a2e33c41d9ffb94ec219a055faf2.scope\": RecentStats: unable to find data in memory cache]" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.476090 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.589266 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-combined-ca-bundle\") pod \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.589442 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-horizon-secret-key\") pod \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.589488 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-horizon-tls-certs\") pod \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.589548 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-logs\") pod 
\"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.589672 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7gft\" (UniqueName: \"kubernetes.io/projected/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-kube-api-access-h7gft\") pod \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.589717 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-scripts\") pod \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.589840 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-config-data\") pod \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\" (UID: \"9a158b3f-a655-4ba9-87a9-b74c44bbc54a\") " Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.591549 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-logs" (OuterVolumeSpecName: "logs") pod "9a158b3f-a655-4ba9-87a9-b74c44bbc54a" (UID: "9a158b3f-a655-4ba9-87a9-b74c44bbc54a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.595727 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-kube-api-access-h7gft" (OuterVolumeSpecName: "kube-api-access-h7gft") pod "9a158b3f-a655-4ba9-87a9-b74c44bbc54a" (UID: "9a158b3f-a655-4ba9-87a9-b74c44bbc54a"). InnerVolumeSpecName "kube-api-access-h7gft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.596048 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9a158b3f-a655-4ba9-87a9-b74c44bbc54a" (UID: "9a158b3f-a655-4ba9-87a9-b74c44bbc54a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.628325 4762 generic.go:334] "Generic (PLEG): container finished" podID="9a158b3f-a655-4ba9-87a9-b74c44bbc54a" containerID="813234363fd65833a81c9d5a15e93d8e4bb3a2e33c41d9ffb94ec219a055faf2" exitCode=137 Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.628373 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69fb85975b-kwm2b" event={"ID":"9a158b3f-a655-4ba9-87a9-b74c44bbc54a","Type":"ContainerDied","Data":"813234363fd65833a81c9d5a15e93d8e4bb3a2e33c41d9ffb94ec219a055faf2"} Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.629362 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-69fb85975b-kwm2b" event={"ID":"9a158b3f-a655-4ba9-87a9-b74c44bbc54a","Type":"ContainerDied","Data":"cdf57eda112d8c4243b32e85df38b1a188c0de6f78e8586d5a221a02d0119f33"} Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.629398 4762 scope.go:117] "RemoveContainer" containerID="4515f28acfb87831b4e06ced07dbf9dcb3ecced4ee00ce44f7989dddfbf32d70" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.628435 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-69fb85975b-kwm2b" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.637249 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-scripts" (OuterVolumeSpecName: "scripts") pod "9a158b3f-a655-4ba9-87a9-b74c44bbc54a" (UID: "9a158b3f-a655-4ba9-87a9-b74c44bbc54a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.637386 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a158b3f-a655-4ba9-87a9-b74c44bbc54a" (UID: "9a158b3f-a655-4ba9-87a9-b74c44bbc54a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.640428 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-config-data" (OuterVolumeSpecName: "config-data") pod "9a158b3f-a655-4ba9-87a9-b74c44bbc54a" (UID: "9a158b3f-a655-4ba9-87a9-b74c44bbc54a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.678182 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "9a158b3f-a655-4ba9-87a9-b74c44bbc54a" (UID: "9a158b3f-a655-4ba9-87a9-b74c44bbc54a"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.693478 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7gft\" (UniqueName: \"kubernetes.io/projected/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-kube-api-access-h7gft\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.693504 4762 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-scripts\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.693515 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.693525 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.693534 4762 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.693544 4762 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.693552 4762 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9a158b3f-a655-4ba9-87a9-b74c44bbc54a-logs\") on node \"crc\" DevicePath \"\"" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.899563 4762 scope.go:117] 
"RemoveContainer" containerID="813234363fd65833a81c9d5a15e93d8e4bb3a2e33c41d9ffb94ec219a055faf2" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.924914 4762 scope.go:117] "RemoveContainer" containerID="4515f28acfb87831b4e06ced07dbf9dcb3ecced4ee00ce44f7989dddfbf32d70" Mar 08 01:36:37 crc kubenswrapper[4762]: E0308 01:36:37.925449 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4515f28acfb87831b4e06ced07dbf9dcb3ecced4ee00ce44f7989dddfbf32d70\": container with ID starting with 4515f28acfb87831b4e06ced07dbf9dcb3ecced4ee00ce44f7989dddfbf32d70 not found: ID does not exist" containerID="4515f28acfb87831b4e06ced07dbf9dcb3ecced4ee00ce44f7989dddfbf32d70" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.925492 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4515f28acfb87831b4e06ced07dbf9dcb3ecced4ee00ce44f7989dddfbf32d70"} err="failed to get container status \"4515f28acfb87831b4e06ced07dbf9dcb3ecced4ee00ce44f7989dddfbf32d70\": rpc error: code = NotFound desc = could not find container \"4515f28acfb87831b4e06ced07dbf9dcb3ecced4ee00ce44f7989dddfbf32d70\": container with ID starting with 4515f28acfb87831b4e06ced07dbf9dcb3ecced4ee00ce44f7989dddfbf32d70 not found: ID does not exist" Mar 08 01:36:37 crc kubenswrapper[4762]: I0308 01:36:37.925522 4762 scope.go:117] "RemoveContainer" containerID="813234363fd65833a81c9d5a15e93d8e4bb3a2e33c41d9ffb94ec219a055faf2" Mar 08 01:36:37 crc kubenswrapper[4762]: E0308 01:36:37.926077 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"813234363fd65833a81c9d5a15e93d8e4bb3a2e33c41d9ffb94ec219a055faf2\": container with ID starting with 813234363fd65833a81c9d5a15e93d8e4bb3a2e33c41d9ffb94ec219a055faf2 not found: ID does not exist" containerID="813234363fd65833a81c9d5a15e93d8e4bb3a2e33c41d9ffb94ec219a055faf2" Mar 08 01:36:37 crc 
kubenswrapper[4762]: I0308 01:36:37.926149 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"813234363fd65833a81c9d5a15e93d8e4bb3a2e33c41d9ffb94ec219a055faf2"} err="failed to get container status \"813234363fd65833a81c9d5a15e93d8e4bb3a2e33c41d9ffb94ec219a055faf2\": rpc error: code = NotFound desc = could not find container \"813234363fd65833a81c9d5a15e93d8e4bb3a2e33c41d9ffb94ec219a055faf2\": container with ID starting with 813234363fd65833a81c9d5a15e93d8e4bb3a2e33c41d9ffb94ec219a055faf2 not found: ID does not exist" Mar 08 01:36:38 crc kubenswrapper[4762]: I0308 01:36:38.001363 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-69fb85975b-kwm2b"] Mar 08 01:36:38 crc kubenswrapper[4762]: I0308 01:36:38.013254 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-69fb85975b-kwm2b"] Mar 08 01:36:39 crc kubenswrapper[4762]: I0308 01:36:39.289376 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a158b3f-a655-4ba9-87a9-b74c44bbc54a" path="/var/lib/kubelet/pods/9a158b3f-a655-4ba9-87a9-b74c44bbc54a/volumes" Mar 08 01:36:42 crc kubenswrapper[4762]: I0308 01:36:42.852219 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:36:42 crc kubenswrapper[4762]: I0308 01:36:42.853080 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:36:43 crc kubenswrapper[4762]: I0308 01:36:43.945537 4762 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Mar 08 01:36:45 crc kubenswrapper[4762]: I0308 01:36:45.554750 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Mar 08 01:36:51 crc kubenswrapper[4762]: I0308 01:36:51.695620 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 08 01:36:55 crc kubenswrapper[4762]: I0308 01:36:55.469145 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Mar 08 01:37:12 crc kubenswrapper[4762]: I0308 01:37:12.852022 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:37:12 crc kubenswrapper[4762]: I0308 01:37:12.853649 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:37:12 crc kubenswrapper[4762]: I0308 01:37:12.853716 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 01:37:12 crc kubenswrapper[4762]: I0308 01:37:12.854681 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc3cf29ce2882f741ddb013e11a9e2102ae73e3bbe7a3960edaf9195694f3df5"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 01:37:12 crc 
kubenswrapper[4762]: I0308 01:37:12.854749 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://cc3cf29ce2882f741ddb013e11a9e2102ae73e3bbe7a3960edaf9195694f3df5" gracePeriod=600 Mar 08 01:37:13 crc kubenswrapper[4762]: I0308 01:37:13.148689 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="cc3cf29ce2882f741ddb013e11a9e2102ae73e3bbe7a3960edaf9195694f3df5" exitCode=0 Mar 08 01:37:13 crc kubenswrapper[4762]: I0308 01:37:13.148875 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"cc3cf29ce2882f741ddb013e11a9e2102ae73e3bbe7a3960edaf9195694f3df5"} Mar 08 01:37:13 crc kubenswrapper[4762]: I0308 01:37:13.149216 4762 scope.go:117] "RemoveContainer" containerID="4e4816e57ea5d490b6622b5baf462103ead47cf00fc0ae9996c3a19a1fd49725" Mar 08 01:37:14 crc kubenswrapper[4762]: I0308 01:37:14.168005 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27"} Mar 08 01:38:00 crc kubenswrapper[4762]: I0308 01:38:00.181021 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548898-zttxq"] Mar 08 01:38:00 crc kubenswrapper[4762]: E0308 01:38:00.182227 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a158b3f-a655-4ba9-87a9-b74c44bbc54a" containerName="horizon-log" Mar 08 01:38:00 crc kubenswrapper[4762]: I0308 01:38:00.182246 4762 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9a158b3f-a655-4ba9-87a9-b74c44bbc54a" containerName="horizon-log" Mar 08 01:38:00 crc kubenswrapper[4762]: E0308 01:38:00.182313 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a158b3f-a655-4ba9-87a9-b74c44bbc54a" containerName="horizon" Mar 08 01:38:00 crc kubenswrapper[4762]: I0308 01:38:00.182322 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a158b3f-a655-4ba9-87a9-b74c44bbc54a" containerName="horizon" Mar 08 01:38:00 crc kubenswrapper[4762]: I0308 01:38:00.182609 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a158b3f-a655-4ba9-87a9-b74c44bbc54a" containerName="horizon-log" Mar 08 01:38:00 crc kubenswrapper[4762]: I0308 01:38:00.182656 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a158b3f-a655-4ba9-87a9-b74c44bbc54a" containerName="horizon" Mar 08 01:38:00 crc kubenswrapper[4762]: I0308 01:38:00.183867 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548898-zttxq" Mar 08 01:38:00 crc kubenswrapper[4762]: I0308 01:38:00.187209 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:38:00 crc kubenswrapper[4762]: I0308 01:38:00.187443 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:38:00 crc kubenswrapper[4762]: I0308 01:38:00.188808 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:38:00 crc kubenswrapper[4762]: I0308 01:38:00.196250 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548898-zttxq"] Mar 08 01:38:00 crc kubenswrapper[4762]: I0308 01:38:00.355459 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c24cg\" (UniqueName: 
\"kubernetes.io/projected/6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280-kube-api-access-c24cg\") pod \"auto-csr-approver-29548898-zttxq\" (UID: \"6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280\") " pod="openshift-infra/auto-csr-approver-29548898-zttxq" Mar 08 01:38:00 crc kubenswrapper[4762]: I0308 01:38:00.459468 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c24cg\" (UniqueName: \"kubernetes.io/projected/6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280-kube-api-access-c24cg\") pod \"auto-csr-approver-29548898-zttxq\" (UID: \"6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280\") " pod="openshift-infra/auto-csr-approver-29548898-zttxq" Mar 08 01:38:00 crc kubenswrapper[4762]: I0308 01:38:00.488896 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c24cg\" (UniqueName: \"kubernetes.io/projected/6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280-kube-api-access-c24cg\") pod \"auto-csr-approver-29548898-zttxq\" (UID: \"6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280\") " pod="openshift-infra/auto-csr-approver-29548898-zttxq" Mar 08 01:38:00 crc kubenswrapper[4762]: I0308 01:38:00.521558 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548898-zttxq" Mar 08 01:38:01 crc kubenswrapper[4762]: I0308 01:38:01.101570 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548898-zttxq"] Mar 08 01:38:02 crc kubenswrapper[4762]: I0308 01:38:02.134111 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548898-zttxq" event={"ID":"6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280","Type":"ContainerStarted","Data":"9a40eb97aaacdbfd625cbbb73f7efd42632bc6e1c71ceeccc5e7debcdee8c3ba"} Mar 08 01:38:03 crc kubenswrapper[4762]: I0308 01:38:03.152032 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548898-zttxq" event={"ID":"6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280","Type":"ContainerStarted","Data":"3f8fa448ff412c4927c3ffd01745be4d9c26f196115e0d18029c217ddb80fed4"} Mar 08 01:38:03 crc kubenswrapper[4762]: I0308 01:38:03.197158 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548898-zttxq" podStartSLOduration=2.311679972 podStartE2EDuration="3.197109916s" podCreationTimestamp="2026-03-08 01:38:00 +0000 UTC" firstStartedPulling="2026-03-08 01:38:01.097665342 +0000 UTC m=+4502.571809686" lastFinishedPulling="2026-03-08 01:38:01.983095246 +0000 UTC m=+4503.457239630" observedRunningTime="2026-03-08 01:38:03.179354035 +0000 UTC m=+4504.653498419" watchObservedRunningTime="2026-03-08 01:38:03.197109916 +0000 UTC m=+4504.671254310" Mar 08 01:38:04 crc kubenswrapper[4762]: I0308 01:38:04.173656 4762 generic.go:334] "Generic (PLEG): container finished" podID="6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280" containerID="3f8fa448ff412c4927c3ffd01745be4d9c26f196115e0d18029c217ddb80fed4" exitCode=0 Mar 08 01:38:04 crc kubenswrapper[4762]: I0308 01:38:04.173814 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548898-zttxq" 
event={"ID":"6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280","Type":"ContainerDied","Data":"3f8fa448ff412c4927c3ffd01745be4d9c26f196115e0d18029c217ddb80fed4"} Mar 08 01:38:05 crc kubenswrapper[4762]: I0308 01:38:05.836017 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548898-zttxq" Mar 08 01:38:05 crc kubenswrapper[4762]: I0308 01:38:05.932991 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c24cg\" (UniqueName: \"kubernetes.io/projected/6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280-kube-api-access-c24cg\") pod \"6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280\" (UID: \"6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280\") " Mar 08 01:38:05 crc kubenswrapper[4762]: I0308 01:38:05.947337 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280-kube-api-access-c24cg" (OuterVolumeSpecName: "kube-api-access-c24cg") pod "6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280" (UID: "6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280"). InnerVolumeSpecName "kube-api-access-c24cg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:38:06 crc kubenswrapper[4762]: I0308 01:38:06.037006 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c24cg\" (UniqueName: \"kubernetes.io/projected/6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280-kube-api-access-c24cg\") on node \"crc\" DevicePath \"\"" Mar 08 01:38:06 crc kubenswrapper[4762]: I0308 01:38:06.234949 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548898-zttxq" event={"ID":"6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280","Type":"ContainerDied","Data":"9a40eb97aaacdbfd625cbbb73f7efd42632bc6e1c71ceeccc5e7debcdee8c3ba"} Mar 08 01:38:06 crc kubenswrapper[4762]: I0308 01:38:06.235022 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a40eb97aaacdbfd625cbbb73f7efd42632bc6e1c71ceeccc5e7debcdee8c3ba" Mar 08 01:38:06 crc kubenswrapper[4762]: I0308 01:38:06.235141 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548898-zttxq" Mar 08 01:38:06 crc kubenswrapper[4762]: I0308 01:38:06.268454 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548892-frtbn"] Mar 08 01:38:06 crc kubenswrapper[4762]: I0308 01:38:06.278661 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548892-frtbn"] Mar 08 01:38:07 crc kubenswrapper[4762]: I0308 01:38:07.283578 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c656452-593d-41b6-8781-1a7f1225ef8a" path="/var/lib/kubelet/pods/2c656452-593d-41b6-8781-1a7f1225ef8a/volumes" Mar 08 01:38:33 crc kubenswrapper[4762]: I0308 01:38:33.511420 4762 scope.go:117] "RemoveContainer" containerID="559e9358545cc3e78994f5faa2f21ebd5c95312725dce55bb4494deb612fd417" Mar 08 01:39:42 crc kubenswrapper[4762]: I0308 01:39:42.851159 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:39:42 crc kubenswrapper[4762]: I0308 01:39:42.851752 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:40:00 crc kubenswrapper[4762]: I0308 01:40:00.169464 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548900-s4m2l"] Mar 08 01:40:00 crc kubenswrapper[4762]: E0308 01:40:00.171008 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280" containerName="oc" Mar 08 01:40:00 crc kubenswrapper[4762]: I0308 01:40:00.171035 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280" containerName="oc" Mar 08 01:40:00 crc kubenswrapper[4762]: I0308 01:40:00.171409 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280" containerName="oc" Mar 08 01:40:00 crc kubenswrapper[4762]: I0308 01:40:00.172803 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548900-s4m2l" Mar 08 01:40:00 crc kubenswrapper[4762]: I0308 01:40:00.177299 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:40:00 crc kubenswrapper[4762]: I0308 01:40:00.177611 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:40:00 crc kubenswrapper[4762]: I0308 01:40:00.184379 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:40:00 crc kubenswrapper[4762]: I0308 01:40:00.184619 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548900-s4m2l"] Mar 08 01:40:00 crc kubenswrapper[4762]: I0308 01:40:00.375681 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6mj8\" (UniqueName: \"kubernetes.io/projected/120024e7-48e7-448b-834d-c5dbe4108ecf-kube-api-access-z6mj8\") pod \"auto-csr-approver-29548900-s4m2l\" (UID: \"120024e7-48e7-448b-834d-c5dbe4108ecf\") " pod="openshift-infra/auto-csr-approver-29548900-s4m2l" Mar 08 01:40:00 crc kubenswrapper[4762]: I0308 01:40:00.477728 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6mj8\" (UniqueName: \"kubernetes.io/projected/120024e7-48e7-448b-834d-c5dbe4108ecf-kube-api-access-z6mj8\") pod \"auto-csr-approver-29548900-s4m2l\" (UID: \"120024e7-48e7-448b-834d-c5dbe4108ecf\") " pod="openshift-infra/auto-csr-approver-29548900-s4m2l" Mar 08 01:40:00 crc kubenswrapper[4762]: I0308 01:40:00.495566 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6mj8\" (UniqueName: \"kubernetes.io/projected/120024e7-48e7-448b-834d-c5dbe4108ecf-kube-api-access-z6mj8\") pod \"auto-csr-approver-29548900-s4m2l\" (UID: \"120024e7-48e7-448b-834d-c5dbe4108ecf\") " 
pod="openshift-infra/auto-csr-approver-29548900-s4m2l" Mar 08 01:40:00 crc kubenswrapper[4762]: I0308 01:40:00.509860 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548900-s4m2l" Mar 08 01:40:01 crc kubenswrapper[4762]: I0308 01:40:01.029281 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548900-s4m2l"] Mar 08 01:40:01 crc kubenswrapper[4762]: I0308 01:40:01.032420 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 01:40:01 crc kubenswrapper[4762]: I0308 01:40:01.734544 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548900-s4m2l" event={"ID":"120024e7-48e7-448b-834d-c5dbe4108ecf","Type":"ContainerStarted","Data":"fd0c99c428e3a390784b9c9106e84bfc9d4e3f93a9f00b99be5782627d8b07a4"} Mar 08 01:40:02 crc kubenswrapper[4762]: I0308 01:40:02.746246 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548900-s4m2l" event={"ID":"120024e7-48e7-448b-834d-c5dbe4108ecf","Type":"ContainerStarted","Data":"ab752eeb76d62f78fbcbc59744f908ed1458628727639aa8bd3b6faebd8429b2"} Mar 08 01:40:02 crc kubenswrapper[4762]: I0308 01:40:02.771861 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548900-s4m2l" podStartSLOduration=1.6431921250000001 podStartE2EDuration="2.771841991s" podCreationTimestamp="2026-03-08 01:40:00 +0000 UTC" firstStartedPulling="2026-03-08 01:40:01.032162367 +0000 UTC m=+4622.506306721" lastFinishedPulling="2026-03-08 01:40:02.160812233 +0000 UTC m=+4623.634956587" observedRunningTime="2026-03-08 01:40:02.762272269 +0000 UTC m=+4624.236416613" watchObservedRunningTime="2026-03-08 01:40:02.771841991 +0000 UTC m=+4624.245986345" Mar 08 01:40:03 crc kubenswrapper[4762]: I0308 01:40:03.756483 4762 generic.go:334] "Generic (PLEG): container 
finished" podID="120024e7-48e7-448b-834d-c5dbe4108ecf" containerID="ab752eeb76d62f78fbcbc59744f908ed1458628727639aa8bd3b6faebd8429b2" exitCode=0 Mar 08 01:40:03 crc kubenswrapper[4762]: I0308 01:40:03.756584 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548900-s4m2l" event={"ID":"120024e7-48e7-448b-834d-c5dbe4108ecf","Type":"ContainerDied","Data":"ab752eeb76d62f78fbcbc59744f908ed1458628727639aa8bd3b6faebd8429b2"} Mar 08 01:40:05 crc kubenswrapper[4762]: I0308 01:40:05.228851 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548900-s4m2l" Mar 08 01:40:05 crc kubenswrapper[4762]: I0308 01:40:05.296772 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6mj8\" (UniqueName: \"kubernetes.io/projected/120024e7-48e7-448b-834d-c5dbe4108ecf-kube-api-access-z6mj8\") pod \"120024e7-48e7-448b-834d-c5dbe4108ecf\" (UID: \"120024e7-48e7-448b-834d-c5dbe4108ecf\") " Mar 08 01:40:05 crc kubenswrapper[4762]: I0308 01:40:05.314974 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/120024e7-48e7-448b-834d-c5dbe4108ecf-kube-api-access-z6mj8" (OuterVolumeSpecName: "kube-api-access-z6mj8") pod "120024e7-48e7-448b-834d-c5dbe4108ecf" (UID: "120024e7-48e7-448b-834d-c5dbe4108ecf"). InnerVolumeSpecName "kube-api-access-z6mj8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:40:05 crc kubenswrapper[4762]: I0308 01:40:05.399788 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6mj8\" (UniqueName: \"kubernetes.io/projected/120024e7-48e7-448b-834d-c5dbe4108ecf-kube-api-access-z6mj8\") on node \"crc\" DevicePath \"\"" Mar 08 01:40:05 crc kubenswrapper[4762]: I0308 01:40:05.778540 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548900-s4m2l" event={"ID":"120024e7-48e7-448b-834d-c5dbe4108ecf","Type":"ContainerDied","Data":"fd0c99c428e3a390784b9c9106e84bfc9d4e3f93a9f00b99be5782627d8b07a4"} Mar 08 01:40:05 crc kubenswrapper[4762]: I0308 01:40:05.778583 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd0c99c428e3a390784b9c9106e84bfc9d4e3f93a9f00b99be5782627d8b07a4" Mar 08 01:40:05 crc kubenswrapper[4762]: I0308 01:40:05.778608 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548900-s4m2l" Mar 08 01:40:05 crc kubenswrapper[4762]: I0308 01:40:05.855519 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548894-gbfkv"] Mar 08 01:40:05 crc kubenswrapper[4762]: I0308 01:40:05.888445 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548894-gbfkv"] Mar 08 01:40:07 crc kubenswrapper[4762]: I0308 01:40:07.277627 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19c6075f-11f7-41f5-a389-9695c2b37e66" path="/var/lib/kubelet/pods/19c6075f-11f7-41f5-a389-9695c2b37e66/volumes" Mar 08 01:40:12 crc kubenswrapper[4762]: I0308 01:40:12.851292 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 08 01:40:12 crc kubenswrapper[4762]: I0308 01:40:12.851849 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:40:33 crc kubenswrapper[4762]: I0308 01:40:33.921547 4762 scope.go:117] "RemoveContainer" containerID="b2f1dd7d8a2ffed021ad13e8bf2019ee2fe08c8b4c5a288f4c94d44826c8d8a7" Mar 08 01:40:42 crc kubenswrapper[4762]: I0308 01:40:42.852044 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:40:42 crc kubenswrapper[4762]: I0308 01:40:42.852608 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:40:42 crc kubenswrapper[4762]: I0308 01:40:42.852669 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 01:40:42 crc kubenswrapper[4762]: I0308 01:40:42.853523 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 
08 01:40:42 crc kubenswrapper[4762]: I0308 01:40:42.853621 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" gracePeriod=600 Mar 08 01:40:42 crc kubenswrapper[4762]: E0308 01:40:42.978959 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:40:43 crc kubenswrapper[4762]: I0308 01:40:43.820124 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" exitCode=0 Mar 08 01:40:43 crc kubenswrapper[4762]: I0308 01:40:43.820185 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27"} Mar 08 01:40:43 crc kubenswrapper[4762]: I0308 01:40:43.820512 4762 scope.go:117] "RemoveContainer" containerID="cc3cf29ce2882f741ddb013e11a9e2102ae73e3bbe7a3960edaf9195694f3df5" Mar 08 01:40:43 crc kubenswrapper[4762]: I0308 01:40:43.821288 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:40:43 crc kubenswrapper[4762]: E0308 01:40:43.821732 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:40:59 crc kubenswrapper[4762]: I0308 01:40:59.278829 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:40:59 crc kubenswrapper[4762]: E0308 01:40:59.279863 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:41:14 crc kubenswrapper[4762]: I0308 01:41:14.264115 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:41:14 crc kubenswrapper[4762]: E0308 01:41:14.265137 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:41:26 crc kubenswrapper[4762]: I0308 01:41:26.263920 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:41:26 crc kubenswrapper[4762]: E0308 01:41:26.265033 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:41:39 crc kubenswrapper[4762]: I0308 01:41:39.274482 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:41:39 crc kubenswrapper[4762]: E0308 01:41:39.275860 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:41:53 crc kubenswrapper[4762]: I0308 01:41:53.263747 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:41:53 crc kubenswrapper[4762]: E0308 01:41:53.265602 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:42:00 crc kubenswrapper[4762]: I0308 01:42:00.169487 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548902-z6zrr"] Mar 08 01:42:00 crc kubenswrapper[4762]: E0308 01:42:00.170873 4762 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="120024e7-48e7-448b-834d-c5dbe4108ecf" containerName="oc" Mar 08 01:42:00 crc kubenswrapper[4762]: I0308 01:42:00.170897 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="120024e7-48e7-448b-834d-c5dbe4108ecf" containerName="oc" Mar 08 01:42:00 crc kubenswrapper[4762]: I0308 01:42:00.171268 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="120024e7-48e7-448b-834d-c5dbe4108ecf" containerName="oc" Mar 08 01:42:00 crc kubenswrapper[4762]: I0308 01:42:00.172140 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548902-z6zrr" Mar 08 01:42:00 crc kubenswrapper[4762]: I0308 01:42:00.175163 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:42:00 crc kubenswrapper[4762]: I0308 01:42:00.175945 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:42:00 crc kubenswrapper[4762]: I0308 01:42:00.184978 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548902-z6zrr"] Mar 08 01:42:00 crc kubenswrapper[4762]: I0308 01:42:00.185824 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:42:00 crc kubenswrapper[4762]: I0308 01:42:00.226463 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f22q8\" (UniqueName: \"kubernetes.io/projected/54e03012-03a5-4a4e-8af3-b2b7837b4d86-kube-api-access-f22q8\") pod \"auto-csr-approver-29548902-z6zrr\" (UID: \"54e03012-03a5-4a4e-8af3-b2b7837b4d86\") " pod="openshift-infra/auto-csr-approver-29548902-z6zrr" Mar 08 01:42:00 crc kubenswrapper[4762]: I0308 01:42:00.329255 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f22q8\" (UniqueName: 
\"kubernetes.io/projected/54e03012-03a5-4a4e-8af3-b2b7837b4d86-kube-api-access-f22q8\") pod \"auto-csr-approver-29548902-z6zrr\" (UID: \"54e03012-03a5-4a4e-8af3-b2b7837b4d86\") " pod="openshift-infra/auto-csr-approver-29548902-z6zrr" Mar 08 01:42:00 crc kubenswrapper[4762]: I0308 01:42:00.359862 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f22q8\" (UniqueName: \"kubernetes.io/projected/54e03012-03a5-4a4e-8af3-b2b7837b4d86-kube-api-access-f22q8\") pod \"auto-csr-approver-29548902-z6zrr\" (UID: \"54e03012-03a5-4a4e-8af3-b2b7837b4d86\") " pod="openshift-infra/auto-csr-approver-29548902-z6zrr" Mar 08 01:42:00 crc kubenswrapper[4762]: I0308 01:42:00.497416 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548902-z6zrr" Mar 08 01:42:01 crc kubenswrapper[4762]: I0308 01:42:01.096432 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548902-z6zrr"] Mar 08 01:42:01 crc kubenswrapper[4762]: I0308 01:42:01.819463 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548902-z6zrr" event={"ID":"54e03012-03a5-4a4e-8af3-b2b7837b4d86","Type":"ContainerStarted","Data":"7c5cd0eb17f56312ca1505dcf461d158f998f0ebc3c3b1e140e2dfe05bb4283b"} Mar 08 01:42:02 crc kubenswrapper[4762]: I0308 01:42:02.836241 4762 generic.go:334] "Generic (PLEG): container finished" podID="54e03012-03a5-4a4e-8af3-b2b7837b4d86" containerID="cef6ea73703b41977adc1a1a199142e42a1a888d17b59986861ddb19824d78c9" exitCode=0 Mar 08 01:42:02 crc kubenswrapper[4762]: I0308 01:42:02.836332 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548902-z6zrr" event={"ID":"54e03012-03a5-4a4e-8af3-b2b7837b4d86","Type":"ContainerDied","Data":"cef6ea73703b41977adc1a1a199142e42a1a888d17b59986861ddb19824d78c9"} Mar 08 01:42:04 crc kubenswrapper[4762]: I0308 01:42:04.292167 4762 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548902-z6zrr" Mar 08 01:42:04 crc kubenswrapper[4762]: I0308 01:42:04.421532 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f22q8\" (UniqueName: \"kubernetes.io/projected/54e03012-03a5-4a4e-8af3-b2b7837b4d86-kube-api-access-f22q8\") pod \"54e03012-03a5-4a4e-8af3-b2b7837b4d86\" (UID: \"54e03012-03a5-4a4e-8af3-b2b7837b4d86\") " Mar 08 01:42:04 crc kubenswrapper[4762]: I0308 01:42:04.427659 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54e03012-03a5-4a4e-8af3-b2b7837b4d86-kube-api-access-f22q8" (OuterVolumeSpecName: "kube-api-access-f22q8") pod "54e03012-03a5-4a4e-8af3-b2b7837b4d86" (UID: "54e03012-03a5-4a4e-8af3-b2b7837b4d86"). InnerVolumeSpecName "kube-api-access-f22q8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:42:04 crc kubenswrapper[4762]: I0308 01:42:04.524543 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f22q8\" (UniqueName: \"kubernetes.io/projected/54e03012-03a5-4a4e-8af3-b2b7837b4d86-kube-api-access-f22q8\") on node \"crc\" DevicePath \"\"" Mar 08 01:42:04 crc kubenswrapper[4762]: I0308 01:42:04.861932 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548902-z6zrr" event={"ID":"54e03012-03a5-4a4e-8af3-b2b7837b4d86","Type":"ContainerDied","Data":"7c5cd0eb17f56312ca1505dcf461d158f998f0ebc3c3b1e140e2dfe05bb4283b"} Mar 08 01:42:04 crc kubenswrapper[4762]: I0308 01:42:04.862193 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c5cd0eb17f56312ca1505dcf461d158f998f0ebc3c3b1e140e2dfe05bb4283b" Mar 08 01:42:04 crc kubenswrapper[4762]: I0308 01:42:04.862031 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548902-z6zrr" Mar 08 01:42:05 crc kubenswrapper[4762]: I0308 01:42:05.378603 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548896-d4xzj"] Mar 08 01:42:05 crc kubenswrapper[4762]: I0308 01:42:05.389710 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548896-d4xzj"] Mar 08 01:42:06 crc kubenswrapper[4762]: I0308 01:42:06.263638 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:42:06 crc kubenswrapper[4762]: E0308 01:42:06.265075 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:42:07 crc kubenswrapper[4762]: I0308 01:42:07.292741 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4" path="/var/lib/kubelet/pods/b8a1d047-99d0-443d-bcb9-bdd5fa3c88d4/volumes" Mar 08 01:42:17 crc kubenswrapper[4762]: I0308 01:42:17.264044 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:42:17 crc kubenswrapper[4762]: E0308 01:42:17.264903 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" 
podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:42:31 crc kubenswrapper[4762]: I0308 01:42:31.264345 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:42:31 crc kubenswrapper[4762]: E0308 01:42:31.265417 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:42:34 crc kubenswrapper[4762]: I0308 01:42:34.031557 4762 scope.go:117] "RemoveContainer" containerID="4db262951f799b47db82b64e291bddbb2735bd9415ba204458e0e054a0b7749d" Mar 08 01:42:34 crc kubenswrapper[4762]: I0308 01:42:34.065710 4762 scope.go:117] "RemoveContainer" containerID="773f1c64a1d3cf8c1827a1fe25934beed384b0629b66166c0dd5795889c1ab63" Mar 08 01:42:34 crc kubenswrapper[4762]: I0308 01:42:34.122003 4762 scope.go:117] "RemoveContainer" containerID="f89be2540ccb46cfb2e7fea4b18c097678fda0f9209de58c129784265de723a3" Mar 08 01:42:42 crc kubenswrapper[4762]: I0308 01:42:42.263254 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:42:42 crc kubenswrapper[4762]: E0308 01:42:42.264462 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:42:43 crc kubenswrapper[4762]: I0308 01:42:43.624744 4762 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-php7q"] Mar 08 01:42:43 crc kubenswrapper[4762]: E0308 01:42:43.625634 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54e03012-03a5-4a4e-8af3-b2b7837b4d86" containerName="oc" Mar 08 01:42:43 crc kubenswrapper[4762]: I0308 01:42:43.625652 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="54e03012-03a5-4a4e-8af3-b2b7837b4d86" containerName="oc" Mar 08 01:42:43 crc kubenswrapper[4762]: I0308 01:42:43.635712 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="54e03012-03a5-4a4e-8af3-b2b7837b4d86" containerName="oc" Mar 08 01:42:43 crc kubenswrapper[4762]: I0308 01:42:43.638882 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:42:43 crc kubenswrapper[4762]: I0308 01:42:43.650505 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-php7q"] Mar 08 01:42:43 crc kubenswrapper[4762]: I0308 01:42:43.770913 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47qcd\" (UniqueName: \"kubernetes.io/projected/04f68f64-25c3-4684-a8c0-e28efb082780-kube-api-access-47qcd\") pod \"redhat-operators-php7q\" (UID: \"04f68f64-25c3-4684-a8c0-e28efb082780\") " pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:42:43 crc kubenswrapper[4762]: I0308 01:42:43.771162 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f68f64-25c3-4684-a8c0-e28efb082780-utilities\") pod \"redhat-operators-php7q\" (UID: \"04f68f64-25c3-4684-a8c0-e28efb082780\") " pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:42:43 crc kubenswrapper[4762]: I0308 01:42:43.771527 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04f68f64-25c3-4684-a8c0-e28efb082780-catalog-content\") pod \"redhat-operators-php7q\" (UID: \"04f68f64-25c3-4684-a8c0-e28efb082780\") " pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:42:43 crc kubenswrapper[4762]: I0308 01:42:43.873594 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47qcd\" (UniqueName: \"kubernetes.io/projected/04f68f64-25c3-4684-a8c0-e28efb082780-kube-api-access-47qcd\") pod \"redhat-operators-php7q\" (UID: \"04f68f64-25c3-4684-a8c0-e28efb082780\") " pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:42:43 crc kubenswrapper[4762]: I0308 01:42:43.873700 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f68f64-25c3-4684-a8c0-e28efb082780-utilities\") pod \"redhat-operators-php7q\" (UID: \"04f68f64-25c3-4684-a8c0-e28efb082780\") " pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:42:43 crc kubenswrapper[4762]: I0308 01:42:43.873869 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04f68f64-25c3-4684-a8c0-e28efb082780-catalog-content\") pod \"redhat-operators-php7q\" (UID: \"04f68f64-25c3-4684-a8c0-e28efb082780\") " pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:42:43 crc kubenswrapper[4762]: I0308 01:42:43.874360 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f68f64-25c3-4684-a8c0-e28efb082780-utilities\") pod \"redhat-operators-php7q\" (UID: \"04f68f64-25c3-4684-a8c0-e28efb082780\") " pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:42:43 crc kubenswrapper[4762]: I0308 01:42:43.874402 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/04f68f64-25c3-4684-a8c0-e28efb082780-catalog-content\") pod \"redhat-operators-php7q\" (UID: \"04f68f64-25c3-4684-a8c0-e28efb082780\") " pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:42:43 crc kubenswrapper[4762]: I0308 01:42:43.891114 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47qcd\" (UniqueName: \"kubernetes.io/projected/04f68f64-25c3-4684-a8c0-e28efb082780-kube-api-access-47qcd\") pod \"redhat-operators-php7q\" (UID: \"04f68f64-25c3-4684-a8c0-e28efb082780\") " pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:42:43 crc kubenswrapper[4762]: I0308 01:42:43.970483 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:42:44 crc kubenswrapper[4762]: I0308 01:42:44.548730 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-php7q"] Mar 08 01:42:45 crc kubenswrapper[4762]: I0308 01:42:45.412558 4762 generic.go:334] "Generic (PLEG): container finished" podID="04f68f64-25c3-4684-a8c0-e28efb082780" containerID="93152065aabd8f2ab41d1b464379ca97c8f63387905d96ce18f0f3c7df077598" exitCode=0 Mar 08 01:42:45 crc kubenswrapper[4762]: I0308 01:42:45.412612 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-php7q" event={"ID":"04f68f64-25c3-4684-a8c0-e28efb082780","Type":"ContainerDied","Data":"93152065aabd8f2ab41d1b464379ca97c8f63387905d96ce18f0f3c7df077598"} Mar 08 01:42:45 crc kubenswrapper[4762]: I0308 01:42:45.412843 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-php7q" event={"ID":"04f68f64-25c3-4684-a8c0-e28efb082780","Type":"ContainerStarted","Data":"e8183d055f673219610656fbe9c0c0e156b69538916539b73f1f79911964cefa"} Mar 08 01:42:47 crc kubenswrapper[4762]: I0308 01:42:47.444834 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-php7q" event={"ID":"04f68f64-25c3-4684-a8c0-e28efb082780","Type":"ContainerStarted","Data":"512e753c6cfa9b5bd759129a563a3f505b1fdd09284383fc7293e0dda504c7e1"} Mar 08 01:42:51 crc kubenswrapper[4762]: I0308 01:42:51.498010 4762 generic.go:334] "Generic (PLEG): container finished" podID="04f68f64-25c3-4684-a8c0-e28efb082780" containerID="512e753c6cfa9b5bd759129a563a3f505b1fdd09284383fc7293e0dda504c7e1" exitCode=0 Mar 08 01:42:51 crc kubenswrapper[4762]: I0308 01:42:51.498164 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-php7q" event={"ID":"04f68f64-25c3-4684-a8c0-e28efb082780","Type":"ContainerDied","Data":"512e753c6cfa9b5bd759129a563a3f505b1fdd09284383fc7293e0dda504c7e1"} Mar 08 01:42:53 crc kubenswrapper[4762]: I0308 01:42:53.534700 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-php7q" event={"ID":"04f68f64-25c3-4684-a8c0-e28efb082780","Type":"ContainerStarted","Data":"31f0eeace27cf4dbfa1b0e030cfd95677af8e7b747da273a1be77753b89e8510"} Mar 08 01:42:53 crc kubenswrapper[4762]: I0308 01:42:53.566633 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-php7q" podStartSLOduration=4.037752515 podStartE2EDuration="10.566608145s" podCreationTimestamp="2026-03-08 01:42:43 +0000 UTC" firstStartedPulling="2026-03-08 01:42:45.415370605 +0000 UTC m=+4786.889514979" lastFinishedPulling="2026-03-08 01:42:51.944226225 +0000 UTC m=+4793.418370609" observedRunningTime="2026-03-08 01:42:53.557196968 +0000 UTC m=+4795.031341312" watchObservedRunningTime="2026-03-08 01:42:53.566608145 +0000 UTC m=+4795.040752489" Mar 08 01:42:53 crc kubenswrapper[4762]: I0308 01:42:53.971632 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:42:53 crc kubenswrapper[4762]: I0308 01:42:53.971711 4762 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:42:55 crc kubenswrapper[4762]: I0308 01:42:55.063290 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-php7q" podUID="04f68f64-25c3-4684-a8c0-e28efb082780" containerName="registry-server" probeResult="failure" output=< Mar 08 01:42:55 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 01:42:55 crc kubenswrapper[4762]: > Mar 08 01:42:56 crc kubenswrapper[4762]: I0308 01:42:56.263873 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:42:56 crc kubenswrapper[4762]: E0308 01:42:56.264750 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:43:02 crc kubenswrapper[4762]: E0308 01:43:02.531740 4762 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.196:57946->38.102.83.196:38853: read tcp 38.102.83.196:57946->38.102.83.196:38853: read: connection reset by peer Mar 08 01:43:05 crc kubenswrapper[4762]: I0308 01:43:05.023584 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-php7q" podUID="04f68f64-25c3-4684-a8c0-e28efb082780" containerName="registry-server" probeResult="failure" output=< Mar 08 01:43:05 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 01:43:05 crc kubenswrapper[4762]: > Mar 08 01:43:07 crc kubenswrapper[4762]: I0308 01:43:07.263640 4762 scope.go:117] "RemoveContainer" 
containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:43:07 crc kubenswrapper[4762]: E0308 01:43:07.264373 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:43:14 crc kubenswrapper[4762]: I0308 01:43:14.031264 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:43:14 crc kubenswrapper[4762]: I0308 01:43:14.115042 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:43:14 crc kubenswrapper[4762]: I0308 01:43:14.821770 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-php7q"] Mar 08 01:43:15 crc kubenswrapper[4762]: I0308 01:43:15.828547 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-php7q" podUID="04f68f64-25c3-4684-a8c0-e28efb082780" containerName="registry-server" containerID="cri-o://31f0eeace27cf4dbfa1b0e030cfd95677af8e7b747da273a1be77753b89e8510" gracePeriod=2 Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.457510 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.517067 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47qcd\" (UniqueName: \"kubernetes.io/projected/04f68f64-25c3-4684-a8c0-e28efb082780-kube-api-access-47qcd\") pod \"04f68f64-25c3-4684-a8c0-e28efb082780\" (UID: \"04f68f64-25c3-4684-a8c0-e28efb082780\") " Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.517404 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04f68f64-25c3-4684-a8c0-e28efb082780-catalog-content\") pod \"04f68f64-25c3-4684-a8c0-e28efb082780\" (UID: \"04f68f64-25c3-4684-a8c0-e28efb082780\") " Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.517589 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f68f64-25c3-4684-a8c0-e28efb082780-utilities\") pod \"04f68f64-25c3-4684-a8c0-e28efb082780\" (UID: \"04f68f64-25c3-4684-a8c0-e28efb082780\") " Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.526163 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04f68f64-25c3-4684-a8c0-e28efb082780-utilities" (OuterVolumeSpecName: "utilities") pod "04f68f64-25c3-4684-a8c0-e28efb082780" (UID: "04f68f64-25c3-4684-a8c0-e28efb082780"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.530840 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04f68f64-25c3-4684-a8c0-e28efb082780-kube-api-access-47qcd" (OuterVolumeSpecName: "kube-api-access-47qcd") pod "04f68f64-25c3-4684-a8c0-e28efb082780" (UID: "04f68f64-25c3-4684-a8c0-e28efb082780"). InnerVolumeSpecName "kube-api-access-47qcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.636045 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/04f68f64-25c3-4684-a8c0-e28efb082780-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.636406 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47qcd\" (UniqueName: \"kubernetes.io/projected/04f68f64-25c3-4684-a8c0-e28efb082780-kube-api-access-47qcd\") on node \"crc\" DevicePath \"\"" Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.661970 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04f68f64-25c3-4684-a8c0-e28efb082780-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "04f68f64-25c3-4684-a8c0-e28efb082780" (UID: "04f68f64-25c3-4684-a8c0-e28efb082780"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.740287 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/04f68f64-25c3-4684-a8c0-e28efb082780-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.852309 4762 generic.go:334] "Generic (PLEG): container finished" podID="04f68f64-25c3-4684-a8c0-e28efb082780" containerID="31f0eeace27cf4dbfa1b0e030cfd95677af8e7b747da273a1be77753b89e8510" exitCode=0 Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.852372 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-php7q" event={"ID":"04f68f64-25c3-4684-a8c0-e28efb082780","Type":"ContainerDied","Data":"31f0eeace27cf4dbfa1b0e030cfd95677af8e7b747da273a1be77753b89e8510"} Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.852412 4762 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-php7q" event={"ID":"04f68f64-25c3-4684-a8c0-e28efb082780","Type":"ContainerDied","Data":"e8183d055f673219610656fbe9c0c0e156b69538916539b73f1f79911964cefa"} Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.852448 4762 scope.go:117] "RemoveContainer" containerID="31f0eeace27cf4dbfa1b0e030cfd95677af8e7b747da273a1be77753b89e8510" Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.852637 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-php7q" Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.912850 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-php7q"] Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.925249 4762 scope.go:117] "RemoveContainer" containerID="512e753c6cfa9b5bd759129a563a3f505b1fdd09284383fc7293e0dda504c7e1" Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.931887 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-php7q"] Mar 08 01:43:16 crc kubenswrapper[4762]: I0308 01:43:16.966895 4762 scope.go:117] "RemoveContainer" containerID="93152065aabd8f2ab41d1b464379ca97c8f63387905d96ce18f0f3c7df077598" Mar 08 01:43:17 crc kubenswrapper[4762]: I0308 01:43:17.011289 4762 scope.go:117] "RemoveContainer" containerID="31f0eeace27cf4dbfa1b0e030cfd95677af8e7b747da273a1be77753b89e8510" Mar 08 01:43:17 crc kubenswrapper[4762]: E0308 01:43:17.012851 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31f0eeace27cf4dbfa1b0e030cfd95677af8e7b747da273a1be77753b89e8510\": container with ID starting with 31f0eeace27cf4dbfa1b0e030cfd95677af8e7b747da273a1be77753b89e8510 not found: ID does not exist" containerID="31f0eeace27cf4dbfa1b0e030cfd95677af8e7b747da273a1be77753b89e8510" Mar 08 01:43:17 crc kubenswrapper[4762]: I0308 01:43:17.013008 4762 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31f0eeace27cf4dbfa1b0e030cfd95677af8e7b747da273a1be77753b89e8510"} err="failed to get container status \"31f0eeace27cf4dbfa1b0e030cfd95677af8e7b747da273a1be77753b89e8510\": rpc error: code = NotFound desc = could not find container \"31f0eeace27cf4dbfa1b0e030cfd95677af8e7b747da273a1be77753b89e8510\": container with ID starting with 31f0eeace27cf4dbfa1b0e030cfd95677af8e7b747da273a1be77753b89e8510 not found: ID does not exist" Mar 08 01:43:17 crc kubenswrapper[4762]: I0308 01:43:17.013112 4762 scope.go:117] "RemoveContainer" containerID="512e753c6cfa9b5bd759129a563a3f505b1fdd09284383fc7293e0dda504c7e1" Mar 08 01:43:17 crc kubenswrapper[4762]: E0308 01:43:17.013789 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"512e753c6cfa9b5bd759129a563a3f505b1fdd09284383fc7293e0dda504c7e1\": container with ID starting with 512e753c6cfa9b5bd759129a563a3f505b1fdd09284383fc7293e0dda504c7e1 not found: ID does not exist" containerID="512e753c6cfa9b5bd759129a563a3f505b1fdd09284383fc7293e0dda504c7e1" Mar 08 01:43:17 crc kubenswrapper[4762]: I0308 01:43:17.013859 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"512e753c6cfa9b5bd759129a563a3f505b1fdd09284383fc7293e0dda504c7e1"} err="failed to get container status \"512e753c6cfa9b5bd759129a563a3f505b1fdd09284383fc7293e0dda504c7e1\": rpc error: code = NotFound desc = could not find container \"512e753c6cfa9b5bd759129a563a3f505b1fdd09284383fc7293e0dda504c7e1\": container with ID starting with 512e753c6cfa9b5bd759129a563a3f505b1fdd09284383fc7293e0dda504c7e1 not found: ID does not exist" Mar 08 01:43:17 crc kubenswrapper[4762]: I0308 01:43:17.013898 4762 scope.go:117] "RemoveContainer" containerID="93152065aabd8f2ab41d1b464379ca97c8f63387905d96ce18f0f3c7df077598" Mar 08 01:43:17 crc kubenswrapper[4762]: E0308 
01:43:17.014387 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93152065aabd8f2ab41d1b464379ca97c8f63387905d96ce18f0f3c7df077598\": container with ID starting with 93152065aabd8f2ab41d1b464379ca97c8f63387905d96ce18f0f3c7df077598 not found: ID does not exist" containerID="93152065aabd8f2ab41d1b464379ca97c8f63387905d96ce18f0f3c7df077598" Mar 08 01:43:17 crc kubenswrapper[4762]: I0308 01:43:17.014435 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93152065aabd8f2ab41d1b464379ca97c8f63387905d96ce18f0f3c7df077598"} err="failed to get container status \"93152065aabd8f2ab41d1b464379ca97c8f63387905d96ce18f0f3c7df077598\": rpc error: code = NotFound desc = could not find container \"93152065aabd8f2ab41d1b464379ca97c8f63387905d96ce18f0f3c7df077598\": container with ID starting with 93152065aabd8f2ab41d1b464379ca97c8f63387905d96ce18f0f3c7df077598 not found: ID does not exist" Mar 08 01:43:17 crc kubenswrapper[4762]: I0308 01:43:17.291101 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04f68f64-25c3-4684-a8c0-e28efb082780" path="/var/lib/kubelet/pods/04f68f64-25c3-4684-a8c0-e28efb082780/volumes" Mar 08 01:43:21 crc kubenswrapper[4762]: I0308 01:43:21.264347 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:43:21 crc kubenswrapper[4762]: E0308 01:43:21.265224 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:43:33 crc kubenswrapper[4762]: I0308 01:43:33.264244 
4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:43:33 crc kubenswrapper[4762]: E0308 01:43:33.265936 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:43:46 crc kubenswrapper[4762]: I0308 01:43:46.264279 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:43:46 crc kubenswrapper[4762]: E0308 01:43:46.265517 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:43:57 crc kubenswrapper[4762]: I0308 01:43:57.263936 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:43:57 crc kubenswrapper[4762]: E0308 01:43:57.267172 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:44:00 crc kubenswrapper[4762]: I0308 
01:44:00.190702 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548904-ft2tq"] Mar 08 01:44:00 crc kubenswrapper[4762]: E0308 01:44:00.192303 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f68f64-25c3-4684-a8c0-e28efb082780" containerName="extract-utilities" Mar 08 01:44:00 crc kubenswrapper[4762]: I0308 01:44:00.192338 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f68f64-25c3-4684-a8c0-e28efb082780" containerName="extract-utilities" Mar 08 01:44:00 crc kubenswrapper[4762]: E0308 01:44:00.192367 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f68f64-25c3-4684-a8c0-e28efb082780" containerName="extract-content" Mar 08 01:44:00 crc kubenswrapper[4762]: I0308 01:44:00.192387 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f68f64-25c3-4684-a8c0-e28efb082780" containerName="extract-content" Mar 08 01:44:00 crc kubenswrapper[4762]: E0308 01:44:00.192461 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f68f64-25c3-4684-a8c0-e28efb082780" containerName="registry-server" Mar 08 01:44:00 crc kubenswrapper[4762]: I0308 01:44:00.192479 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f68f64-25c3-4684-a8c0-e28efb082780" containerName="registry-server" Mar 08 01:44:00 crc kubenswrapper[4762]: I0308 01:44:00.192983 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f68f64-25c3-4684-a8c0-e28efb082780" containerName="registry-server" Mar 08 01:44:00 crc kubenswrapper[4762]: I0308 01:44:00.196642 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548904-ft2tq" Mar 08 01:44:00 crc kubenswrapper[4762]: I0308 01:44:00.204117 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:44:00 crc kubenswrapper[4762]: I0308 01:44:00.205308 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:44:00 crc kubenswrapper[4762]: I0308 01:44:00.205496 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:44:00 crc kubenswrapper[4762]: I0308 01:44:00.209742 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548904-ft2tq"] Mar 08 01:44:00 crc kubenswrapper[4762]: I0308 01:44:00.391903 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwr9h\" (UniqueName: \"kubernetes.io/projected/5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5-kube-api-access-xwr9h\") pod \"auto-csr-approver-29548904-ft2tq\" (UID: \"5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5\") " pod="openshift-infra/auto-csr-approver-29548904-ft2tq" Mar 08 01:44:00 crc kubenswrapper[4762]: I0308 01:44:00.495323 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwr9h\" (UniqueName: \"kubernetes.io/projected/5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5-kube-api-access-xwr9h\") pod \"auto-csr-approver-29548904-ft2tq\" (UID: \"5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5\") " pod="openshift-infra/auto-csr-approver-29548904-ft2tq" Mar 08 01:44:00 crc kubenswrapper[4762]: I0308 01:44:00.526255 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwr9h\" (UniqueName: \"kubernetes.io/projected/5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5-kube-api-access-xwr9h\") pod \"auto-csr-approver-29548904-ft2tq\" (UID: \"5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5\") " 
pod="openshift-infra/auto-csr-approver-29548904-ft2tq" Mar 08 01:44:00 crc kubenswrapper[4762]: I0308 01:44:00.551148 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548904-ft2tq" Mar 08 01:44:01 crc kubenswrapper[4762]: I0308 01:44:01.033419 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548904-ft2tq"] Mar 08 01:44:01 crc kubenswrapper[4762]: I0308 01:44:01.468073 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548904-ft2tq" event={"ID":"5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5","Type":"ContainerStarted","Data":"4efa1ac577dca17380ca375392487a98a8789f85de2b235bf3655d84cda5c684"} Mar 08 01:44:02 crc kubenswrapper[4762]: I0308 01:44:02.481620 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548904-ft2tq" event={"ID":"5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5","Type":"ContainerStarted","Data":"38670526d583e74f5a0c9b4b7ea0895571f2ba826249add13d8c525cb09910a9"} Mar 08 01:44:02 crc kubenswrapper[4762]: I0308 01:44:02.500062 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548904-ft2tq" podStartSLOduration=1.483321547 podStartE2EDuration="2.500044747s" podCreationTimestamp="2026-03-08 01:44:00 +0000 UTC" firstStartedPulling="2026-03-08 01:44:01.044850186 +0000 UTC m=+4862.518994540" lastFinishedPulling="2026-03-08 01:44:02.061573356 +0000 UTC m=+4863.535717740" observedRunningTime="2026-03-08 01:44:02.49388622 +0000 UTC m=+4863.968030564" watchObservedRunningTime="2026-03-08 01:44:02.500044747 +0000 UTC m=+4863.974189091" Mar 08 01:44:04 crc kubenswrapper[4762]: I0308 01:44:04.509410 4762 generic.go:334] "Generic (PLEG): container finished" podID="5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5" containerID="38670526d583e74f5a0c9b4b7ea0895571f2ba826249add13d8c525cb09910a9" exitCode=0 Mar 08 01:44:04 crc 
kubenswrapper[4762]: I0308 01:44:04.509493 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548904-ft2tq" event={"ID":"5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5","Type":"ContainerDied","Data":"38670526d583e74f5a0c9b4b7ea0895571f2ba826249add13d8c525cb09910a9"} Mar 08 01:44:06 crc kubenswrapper[4762]: I0308 01:44:06.543200 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548904-ft2tq" event={"ID":"5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5","Type":"ContainerDied","Data":"4efa1ac577dca17380ca375392487a98a8789f85de2b235bf3655d84cda5c684"} Mar 08 01:44:06 crc kubenswrapper[4762]: I0308 01:44:06.544222 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4efa1ac577dca17380ca375392487a98a8789f85de2b235bf3655d84cda5c684" Mar 08 01:44:07 crc kubenswrapper[4762]: I0308 01:44:07.029588 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548904-ft2tq" Mar 08 01:44:07 crc kubenswrapper[4762]: I0308 01:44:07.109839 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwr9h\" (UniqueName: \"kubernetes.io/projected/5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5-kube-api-access-xwr9h\") pod \"5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5\" (UID: \"5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5\") " Mar 08 01:44:07 crc kubenswrapper[4762]: I0308 01:44:07.114983 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5-kube-api-access-xwr9h" (OuterVolumeSpecName: "kube-api-access-xwr9h") pod "5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5" (UID: "5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5"). InnerVolumeSpecName "kube-api-access-xwr9h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:44:07 crc kubenswrapper[4762]: I0308 01:44:07.212683 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwr9h\" (UniqueName: \"kubernetes.io/projected/5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5-kube-api-access-xwr9h\") on node \"crc\" DevicePath \"\"" Mar 08 01:44:07 crc kubenswrapper[4762]: I0308 01:44:07.583344 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548904-ft2tq" Mar 08 01:44:08 crc kubenswrapper[4762]: I0308 01:44:08.123594 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548898-zttxq"] Mar 08 01:44:08 crc kubenswrapper[4762]: I0308 01:44:08.133184 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548898-zttxq"] Mar 08 01:44:09 crc kubenswrapper[4762]: I0308 01:44:09.296058 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280" path="/var/lib/kubelet/pods/6d0b4c3b-a47a-4a30-8c9f-1cb9aad33280/volumes" Mar 08 01:44:10 crc kubenswrapper[4762]: I0308 01:44:10.265134 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:44:10 crc kubenswrapper[4762]: E0308 01:44:10.265656 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:44:24 crc kubenswrapper[4762]: I0308 01:44:24.264274 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 
01:44:24 crc kubenswrapper[4762]: E0308 01:44:24.265479 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:44:34 crc kubenswrapper[4762]: I0308 01:44:34.365130 4762 scope.go:117] "RemoveContainer" containerID="3f8fa448ff412c4927c3ffd01745be4d9c26f196115e0d18029c217ddb80fed4" Mar 08 01:44:35 crc kubenswrapper[4762]: I0308 01:44:35.264002 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:44:35 crc kubenswrapper[4762]: E0308 01:44:35.264949 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:44:46 crc kubenswrapper[4762]: I0308 01:44:46.264192 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:44:46 crc kubenswrapper[4762]: E0308 01:44:46.265139 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" 
podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.162999 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4"] Mar 08 01:45:00 crc kubenswrapper[4762]: E0308 01:45:00.163905 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5" containerName="oc" Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.163917 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5" containerName="oc" Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.164141 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5" containerName="oc" Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.164805 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.176195 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4"] Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.200558 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.200627 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.303153 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c20004b-5b73-4e31-afb7-f9f3e44a690e-secret-volume\") pod \"collect-profiles-29548905-9rmf4\" (UID: \"1c20004b-5b73-4e31-afb7-f9f3e44a690e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.303391 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2cc5\" (UniqueName: \"kubernetes.io/projected/1c20004b-5b73-4e31-afb7-f9f3e44a690e-kube-api-access-m2cc5\") pod \"collect-profiles-29548905-9rmf4\" (UID: \"1c20004b-5b73-4e31-afb7-f9f3e44a690e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.303530 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c20004b-5b73-4e31-afb7-f9f3e44a690e-config-volume\") pod \"collect-profiles-29548905-9rmf4\" (UID: \"1c20004b-5b73-4e31-afb7-f9f3e44a690e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.405333 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2cc5\" (UniqueName: \"kubernetes.io/projected/1c20004b-5b73-4e31-afb7-f9f3e44a690e-kube-api-access-m2cc5\") pod \"collect-profiles-29548905-9rmf4\" (UID: \"1c20004b-5b73-4e31-afb7-f9f3e44a690e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.405372 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c20004b-5b73-4e31-afb7-f9f3e44a690e-config-volume\") pod \"collect-profiles-29548905-9rmf4\" (UID: \"1c20004b-5b73-4e31-afb7-f9f3e44a690e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.405441 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/1c20004b-5b73-4e31-afb7-f9f3e44a690e-secret-volume\") pod \"collect-profiles-29548905-9rmf4\" (UID: \"1c20004b-5b73-4e31-afb7-f9f3e44a690e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.406290 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c20004b-5b73-4e31-afb7-f9f3e44a690e-config-volume\") pod \"collect-profiles-29548905-9rmf4\" (UID: \"1c20004b-5b73-4e31-afb7-f9f3e44a690e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.413103 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c20004b-5b73-4e31-afb7-f9f3e44a690e-secret-volume\") pod \"collect-profiles-29548905-9rmf4\" (UID: \"1c20004b-5b73-4e31-afb7-f9f3e44a690e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.424943 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2cc5\" (UniqueName: \"kubernetes.io/projected/1c20004b-5b73-4e31-afb7-f9f3e44a690e-kube-api-access-m2cc5\") pod \"collect-profiles-29548905-9rmf4\" (UID: \"1c20004b-5b73-4e31-afb7-f9f3e44a690e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" Mar 08 01:45:00 crc kubenswrapper[4762]: I0308 01:45:00.515333 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" Mar 08 01:45:01 crc kubenswrapper[4762]: I0308 01:45:01.094123 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4"] Mar 08 01:45:01 crc kubenswrapper[4762]: I0308 01:45:01.263779 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:45:01 crc kubenswrapper[4762]: E0308 01:45:01.265187 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:45:02 crc kubenswrapper[4762]: I0308 01:45:02.284476 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" event={"ID":"1c20004b-5b73-4e31-afb7-f9f3e44a690e","Type":"ContainerStarted","Data":"9d6412d5b5861abdba22408c4dd48ca59f7e6739120b4f67cd9427e7f275c32f"} Mar 08 01:45:02 crc kubenswrapper[4762]: I0308 01:45:02.285094 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" event={"ID":"1c20004b-5b73-4e31-afb7-f9f3e44a690e","Type":"ContainerStarted","Data":"6629f4ae0f47c1451e1728cf8d1132a20b96cf747814756528c74ad361fdcbfa"} Mar 08 01:45:02 crc kubenswrapper[4762]: I0308 01:45:02.325341 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" podStartSLOduration=2.325317017 podStartE2EDuration="2.325317017s" podCreationTimestamp="2026-03-08 01:45:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 01:45:02.300263306 +0000 UTC m=+4923.774407650" watchObservedRunningTime="2026-03-08 01:45:02.325317017 +0000 UTC m=+4923.799461381" Mar 08 01:45:03 crc kubenswrapper[4762]: I0308 01:45:03.302805 4762 generic.go:334] "Generic (PLEG): container finished" podID="1c20004b-5b73-4e31-afb7-f9f3e44a690e" containerID="9d6412d5b5861abdba22408c4dd48ca59f7e6739120b4f67cd9427e7f275c32f" exitCode=0 Mar 08 01:45:03 crc kubenswrapper[4762]: I0308 01:45:03.303826 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" event={"ID":"1c20004b-5b73-4e31-afb7-f9f3e44a690e","Type":"ContainerDied","Data":"9d6412d5b5861abdba22408c4dd48ca59f7e6739120b4f67cd9427e7f275c32f"} Mar 08 01:45:04 crc kubenswrapper[4762]: I0308 01:45:04.757005 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" Mar 08 01:45:04 crc kubenswrapper[4762]: I0308 01:45:04.901793 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c20004b-5b73-4e31-afb7-f9f3e44a690e-config-volume\") pod \"1c20004b-5b73-4e31-afb7-f9f3e44a690e\" (UID: \"1c20004b-5b73-4e31-afb7-f9f3e44a690e\") " Mar 08 01:45:04 crc kubenswrapper[4762]: I0308 01:45:04.901924 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2cc5\" (UniqueName: \"kubernetes.io/projected/1c20004b-5b73-4e31-afb7-f9f3e44a690e-kube-api-access-m2cc5\") pod \"1c20004b-5b73-4e31-afb7-f9f3e44a690e\" (UID: \"1c20004b-5b73-4e31-afb7-f9f3e44a690e\") " Mar 08 01:45:04 crc kubenswrapper[4762]: I0308 01:45:04.901950 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/1c20004b-5b73-4e31-afb7-f9f3e44a690e-secret-volume\") pod \"1c20004b-5b73-4e31-afb7-f9f3e44a690e\" (UID: \"1c20004b-5b73-4e31-afb7-f9f3e44a690e\") " Mar 08 01:45:04 crc kubenswrapper[4762]: I0308 01:45:04.902502 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c20004b-5b73-4e31-afb7-f9f3e44a690e-config-volume" (OuterVolumeSpecName: "config-volume") pod "1c20004b-5b73-4e31-afb7-f9f3e44a690e" (UID: "1c20004b-5b73-4e31-afb7-f9f3e44a690e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 01:45:04 crc kubenswrapper[4762]: I0308 01:45:04.907708 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c20004b-5b73-4e31-afb7-f9f3e44a690e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1c20004b-5b73-4e31-afb7-f9f3e44a690e" (UID: "1c20004b-5b73-4e31-afb7-f9f3e44a690e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 01:45:04 crc kubenswrapper[4762]: I0308 01:45:04.910781 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c20004b-5b73-4e31-afb7-f9f3e44a690e-kube-api-access-m2cc5" (OuterVolumeSpecName: "kube-api-access-m2cc5") pod "1c20004b-5b73-4e31-afb7-f9f3e44a690e" (UID: "1c20004b-5b73-4e31-afb7-f9f3e44a690e"). InnerVolumeSpecName "kube-api-access-m2cc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:45:05 crc kubenswrapper[4762]: I0308 01:45:05.005288 4762 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c20004b-5b73-4e31-afb7-f9f3e44a690e-config-volume\") on node \"crc\" DevicePath \"\"" Mar 08 01:45:05 crc kubenswrapper[4762]: I0308 01:45:05.005331 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2cc5\" (UniqueName: \"kubernetes.io/projected/1c20004b-5b73-4e31-afb7-f9f3e44a690e-kube-api-access-m2cc5\") on node \"crc\" DevicePath \"\"" Mar 08 01:45:05 crc kubenswrapper[4762]: I0308 01:45:05.005342 4762 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c20004b-5b73-4e31-afb7-f9f3e44a690e-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 08 01:45:05 crc kubenswrapper[4762]: I0308 01:45:05.334129 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" event={"ID":"1c20004b-5b73-4e31-afb7-f9f3e44a690e","Type":"ContainerDied","Data":"6629f4ae0f47c1451e1728cf8d1132a20b96cf747814756528c74ad361fdcbfa"} Mar 08 01:45:05 crc kubenswrapper[4762]: I0308 01:45:05.334583 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6629f4ae0f47c1451e1728cf8d1132a20b96cf747814756528c74ad361fdcbfa" Mar 08 01:45:05 crc kubenswrapper[4762]: I0308 01:45:05.334217 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548905-9rmf4" Mar 08 01:45:05 crc kubenswrapper[4762]: I0308 01:45:05.410102 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"] Mar 08 01:45:05 crc kubenswrapper[4762]: I0308 01:45:05.426725 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548860-qnbsn"] Mar 08 01:45:07 crc kubenswrapper[4762]: I0308 01:45:07.282277 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60a0152d-ae49-4d71-bf70-87f040f34a1c" path="/var/lib/kubelet/pods/60a0152d-ae49-4d71-bf70-87f040f34a1c/volumes" Mar 08 01:45:13 crc kubenswrapper[4762]: I0308 01:45:13.263872 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:45:13 crc kubenswrapper[4762]: E0308 01:45:13.264895 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:45:27 crc kubenswrapper[4762]: I0308 01:45:27.263447 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:45:27 crc kubenswrapper[4762]: E0308 01:45:27.264575 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:45:33 crc kubenswrapper[4762]: I0308 01:45:33.091621 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-ldh47"] Mar 08 01:45:33 crc kubenswrapper[4762]: I0308 01:45:33.102825 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-ldh47"] Mar 08 01:45:33 crc kubenswrapper[4762]: I0308 01:45:33.293691 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2bd8503-1bcc-4cb0-9928-19f698eca2fd" path="/var/lib/kubelet/pods/f2bd8503-1bcc-4cb0-9928-19f698eca2fd/volumes" Mar 08 01:45:34 crc kubenswrapper[4762]: I0308 01:45:34.046178 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-31d0-account-create-update-pldhf"] Mar 08 01:45:34 crc kubenswrapper[4762]: I0308 01:45:34.059055 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-31d0-account-create-update-pldhf"] Mar 08 01:45:34 crc kubenswrapper[4762]: I0308 01:45:34.526970 4762 scope.go:117] "RemoveContainer" containerID="651e64ffafe4b57fa615d3e794c411da3f0d9fd72c7fadc6f5f2d6e6c293950d" Mar 08 01:45:34 crc kubenswrapper[4762]: I0308 01:45:34.570942 4762 scope.go:117] "RemoveContainer" containerID="db89ef5eba2806a89875a4388efd02b7b520421647881137b78da1dd89f2020c" Mar 08 01:45:35 crc kubenswrapper[4762]: I0308 01:45:35.281622 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ab27811-9a59-4997-8546-0b1bf6668150" path="/var/lib/kubelet/pods/2ab27811-9a59-4997-8546-0b1bf6668150/volumes" Mar 08 01:45:41 crc kubenswrapper[4762]: I0308 01:45:41.264811 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:45:41 crc kubenswrapper[4762]: E0308 01:45:41.266081 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:45:54 crc kubenswrapper[4762]: I0308 01:45:54.263898 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:45:54 crc kubenswrapper[4762]: I0308 01:45:54.973286 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"2674f0d3d2914da9ebde9ec6dcca02911f6c9e8079fffe34619264632466e56f"} Mar 08 01:45:58 crc kubenswrapper[4762]: I0308 01:45:58.069633 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-jm7mn"] Mar 08 01:45:58 crc kubenswrapper[4762]: I0308 01:45:58.080445 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-jm7mn"] Mar 08 01:45:59 crc kubenswrapper[4762]: I0308 01:45:59.287423 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f263492e-5989-410e-875a-3857b7821aeb" path="/var/lib/kubelet/pods/f263492e-5989-410e-875a-3857b7821aeb/volumes" Mar 08 01:46:00 crc kubenswrapper[4762]: I0308 01:46:00.182157 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548906-wk5v9"] Mar 08 01:46:00 crc kubenswrapper[4762]: E0308 01:46:00.182802 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c20004b-5b73-4e31-afb7-f9f3e44a690e" containerName="collect-profiles" Mar 08 01:46:00 crc kubenswrapper[4762]: I0308 01:46:00.182823 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c20004b-5b73-4e31-afb7-f9f3e44a690e" containerName="collect-profiles" Mar 08 01:46:00 crc kubenswrapper[4762]: 
I0308 01:46:00.183077 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c20004b-5b73-4e31-afb7-f9f3e44a690e" containerName="collect-profiles" Mar 08 01:46:00 crc kubenswrapper[4762]: I0308 01:46:00.184052 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548906-wk5v9" Mar 08 01:46:00 crc kubenswrapper[4762]: I0308 01:46:00.186438 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:46:00 crc kubenswrapper[4762]: I0308 01:46:00.186560 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:46:00 crc kubenswrapper[4762]: I0308 01:46:00.186849 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:46:00 crc kubenswrapper[4762]: I0308 01:46:00.195431 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548906-wk5v9"] Mar 08 01:46:00 crc kubenswrapper[4762]: I0308 01:46:00.242699 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctthd\" (UniqueName: \"kubernetes.io/projected/8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33-kube-api-access-ctthd\") pod \"auto-csr-approver-29548906-wk5v9\" (UID: \"8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33\") " pod="openshift-infra/auto-csr-approver-29548906-wk5v9" Mar 08 01:46:00 crc kubenswrapper[4762]: I0308 01:46:00.344939 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctthd\" (UniqueName: \"kubernetes.io/projected/8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33-kube-api-access-ctthd\") pod \"auto-csr-approver-29548906-wk5v9\" (UID: \"8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33\") " pod="openshift-infra/auto-csr-approver-29548906-wk5v9" Mar 08 01:46:00 crc kubenswrapper[4762]: I0308 01:46:00.376841 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctthd\" (UniqueName: \"kubernetes.io/projected/8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33-kube-api-access-ctthd\") pod \"auto-csr-approver-29548906-wk5v9\" (UID: \"8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33\") " pod="openshift-infra/auto-csr-approver-29548906-wk5v9" Mar 08 01:46:00 crc kubenswrapper[4762]: I0308 01:46:00.513793 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548906-wk5v9" Mar 08 01:46:01 crc kubenswrapper[4762]: I0308 01:46:01.037623 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548906-wk5v9"] Mar 08 01:46:01 crc kubenswrapper[4762]: I0308 01:46:01.045151 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 01:46:02 crc kubenswrapper[4762]: I0308 01:46:02.058974 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548906-wk5v9" event={"ID":"8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33","Type":"ContainerStarted","Data":"5b8c726941097483d6a7fe228b8f00eed7d3e6f0118f120b73d10304c948263e"} Mar 08 01:46:03 crc kubenswrapper[4762]: I0308 01:46:03.073863 4762 generic.go:334] "Generic (PLEG): container finished" podID="8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33" containerID="c6d779292a39fe81f79f3722820e4037ff09e3fdbe12a541ce568c5a711970f3" exitCode=0 Mar 08 01:46:03 crc kubenswrapper[4762]: I0308 01:46:03.074004 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548906-wk5v9" event={"ID":"8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33","Type":"ContainerDied","Data":"c6d779292a39fe81f79f3722820e4037ff09e3fdbe12a541ce568c5a711970f3"} Mar 08 01:46:04 crc kubenswrapper[4762]: I0308 01:46:04.610295 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548906-wk5v9" Mar 08 01:46:04 crc kubenswrapper[4762]: I0308 01:46:04.697388 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctthd\" (UniqueName: \"kubernetes.io/projected/8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33-kube-api-access-ctthd\") pod \"8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33\" (UID: \"8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33\") " Mar 08 01:46:04 crc kubenswrapper[4762]: I0308 01:46:04.713540 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33-kube-api-access-ctthd" (OuterVolumeSpecName: "kube-api-access-ctthd") pod "8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33" (UID: "8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33"). InnerVolumeSpecName "kube-api-access-ctthd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:46:04 crc kubenswrapper[4762]: I0308 01:46:04.799926 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctthd\" (UniqueName: \"kubernetes.io/projected/8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33-kube-api-access-ctthd\") on node \"crc\" DevicePath \"\"" Mar 08 01:46:05 crc kubenswrapper[4762]: I0308 01:46:05.097658 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548906-wk5v9" event={"ID":"8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33","Type":"ContainerDied","Data":"5b8c726941097483d6a7fe228b8f00eed7d3e6f0118f120b73d10304c948263e"} Mar 08 01:46:05 crc kubenswrapper[4762]: I0308 01:46:05.097698 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b8c726941097483d6a7fe228b8f00eed7d3e6f0118f120b73d10304c948263e" Mar 08 01:46:05 crc kubenswrapper[4762]: I0308 01:46:05.097725 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548906-wk5v9" Mar 08 01:46:05 crc kubenswrapper[4762]: I0308 01:46:05.685242 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548900-s4m2l"] Mar 08 01:46:05 crc kubenswrapper[4762]: I0308 01:46:05.697934 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548900-s4m2l"] Mar 08 01:46:07 crc kubenswrapper[4762]: I0308 01:46:07.285836 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="120024e7-48e7-448b-834d-c5dbe4108ecf" path="/var/lib/kubelet/pods/120024e7-48e7-448b-834d-c5dbe4108ecf/volumes" Mar 08 01:46:34 crc kubenswrapper[4762]: I0308 01:46:34.769673 4762 scope.go:117] "RemoveContainer" containerID="ab752eeb76d62f78fbcbc59744f908ed1458628727639aa8bd3b6faebd8429b2" Mar 08 01:46:34 crc kubenswrapper[4762]: I0308 01:46:34.872548 4762 scope.go:117] "RemoveContainer" containerID="e69a94a3c8b780957ad9db67f7283275b306dc6ff6c897f36f1ac62d1417b4e7" Mar 08 01:46:34 crc kubenswrapper[4762]: I0308 01:46:34.915917 4762 scope.go:117] "RemoveContainer" containerID="e87c72a33d554c645866f865993767b63284c2c71b7e5d745beff904c56f386e" Mar 08 01:47:32 crc kubenswrapper[4762]: I0308 01:47:32.409995 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nbmxx"] Mar 08 01:47:32 crc kubenswrapper[4762]: E0308 01:47:32.411424 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33" containerName="oc" Mar 08 01:47:32 crc kubenswrapper[4762]: I0308 01:47:32.411447 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33" containerName="oc" Mar 08 01:47:32 crc kubenswrapper[4762]: I0308 01:47:32.411904 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33" containerName="oc" Mar 08 01:47:32 crc kubenswrapper[4762]: I0308 
01:47:32.415042 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:32 crc kubenswrapper[4762]: I0308 01:47:32.427192 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbmxx"] Mar 08 01:47:32 crc kubenswrapper[4762]: I0308 01:47:32.490431 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nrdn\" (UniqueName: \"kubernetes.io/projected/75304ce2-5382-48a4-9667-4e73ee660238-kube-api-access-9nrdn\") pod \"redhat-marketplace-nbmxx\" (UID: \"75304ce2-5382-48a4-9667-4e73ee660238\") " pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:32 crc kubenswrapper[4762]: I0308 01:47:32.490724 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75304ce2-5382-48a4-9667-4e73ee660238-catalog-content\") pod \"redhat-marketplace-nbmxx\" (UID: \"75304ce2-5382-48a4-9667-4e73ee660238\") " pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:32 crc kubenswrapper[4762]: I0308 01:47:32.491148 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75304ce2-5382-48a4-9667-4e73ee660238-utilities\") pod \"redhat-marketplace-nbmxx\" (UID: \"75304ce2-5382-48a4-9667-4e73ee660238\") " pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:32 crc kubenswrapper[4762]: I0308 01:47:32.593513 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75304ce2-5382-48a4-9667-4e73ee660238-utilities\") pod \"redhat-marketplace-nbmxx\" (UID: \"75304ce2-5382-48a4-9667-4e73ee660238\") " pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:32 crc kubenswrapper[4762]: I0308 
01:47:32.593614 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nrdn\" (UniqueName: \"kubernetes.io/projected/75304ce2-5382-48a4-9667-4e73ee660238-kube-api-access-9nrdn\") pod \"redhat-marketplace-nbmxx\" (UID: \"75304ce2-5382-48a4-9667-4e73ee660238\") " pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:32 crc kubenswrapper[4762]: I0308 01:47:32.593690 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75304ce2-5382-48a4-9667-4e73ee660238-catalog-content\") pod \"redhat-marketplace-nbmxx\" (UID: \"75304ce2-5382-48a4-9667-4e73ee660238\") " pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:32 crc kubenswrapper[4762]: I0308 01:47:32.594333 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75304ce2-5382-48a4-9667-4e73ee660238-catalog-content\") pod \"redhat-marketplace-nbmxx\" (UID: \"75304ce2-5382-48a4-9667-4e73ee660238\") " pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:32 crc kubenswrapper[4762]: I0308 01:47:32.594651 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75304ce2-5382-48a4-9667-4e73ee660238-utilities\") pod \"redhat-marketplace-nbmxx\" (UID: \"75304ce2-5382-48a4-9667-4e73ee660238\") " pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:32 crc kubenswrapper[4762]: I0308 01:47:32.628753 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nrdn\" (UniqueName: \"kubernetes.io/projected/75304ce2-5382-48a4-9667-4e73ee660238-kube-api-access-9nrdn\") pod \"redhat-marketplace-nbmxx\" (UID: \"75304ce2-5382-48a4-9667-4e73ee660238\") " pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:32 crc kubenswrapper[4762]: I0308 01:47:32.756633 4762 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:33 crc kubenswrapper[4762]: I0308 01:47:33.311437 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbmxx"] Mar 08 01:47:33 crc kubenswrapper[4762]: I0308 01:47:33.792672 4762 generic.go:334] "Generic (PLEG): container finished" podID="75304ce2-5382-48a4-9667-4e73ee660238" containerID="1c144b22efb02ee83005aa05f4105f8db981092ea1bf73be576d3f9c42630316" exitCode=0 Mar 08 01:47:33 crc kubenswrapper[4762]: I0308 01:47:33.792732 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbmxx" event={"ID":"75304ce2-5382-48a4-9667-4e73ee660238","Type":"ContainerDied","Data":"1c144b22efb02ee83005aa05f4105f8db981092ea1bf73be576d3f9c42630316"} Mar 08 01:47:33 crc kubenswrapper[4762]: I0308 01:47:33.792941 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbmxx" event={"ID":"75304ce2-5382-48a4-9667-4e73ee660238","Type":"ContainerStarted","Data":"9621da3123d324deeba2ac966943d49f047c641e9358f00789db2248ce0a8684"} Mar 08 01:47:34 crc kubenswrapper[4762]: I0308 01:47:34.191848 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wj96f"] Mar 08 01:47:34 crc kubenswrapper[4762]: I0308 01:47:34.197531 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:34 crc kubenswrapper[4762]: I0308 01:47:34.216815 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wj96f"] Mar 08 01:47:34 crc kubenswrapper[4762]: I0308 01:47:34.234229 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pk8n\" (UniqueName: \"kubernetes.io/projected/f714ab74-5c4b-4dd8-96b8-8b01120da79b-kube-api-access-4pk8n\") pod \"certified-operators-wj96f\" (UID: \"f714ab74-5c4b-4dd8-96b8-8b01120da79b\") " pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:34 crc kubenswrapper[4762]: I0308 01:47:34.234746 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f714ab74-5c4b-4dd8-96b8-8b01120da79b-utilities\") pod \"certified-operators-wj96f\" (UID: \"f714ab74-5c4b-4dd8-96b8-8b01120da79b\") " pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:34 crc kubenswrapper[4762]: I0308 01:47:34.234980 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f714ab74-5c4b-4dd8-96b8-8b01120da79b-catalog-content\") pod \"certified-operators-wj96f\" (UID: \"f714ab74-5c4b-4dd8-96b8-8b01120da79b\") " pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:34 crc kubenswrapper[4762]: I0308 01:47:34.337156 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f714ab74-5c4b-4dd8-96b8-8b01120da79b-utilities\") pod \"certified-operators-wj96f\" (UID: \"f714ab74-5c4b-4dd8-96b8-8b01120da79b\") " pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:34 crc kubenswrapper[4762]: I0308 01:47:34.337243 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f714ab74-5c4b-4dd8-96b8-8b01120da79b-catalog-content\") pod \"certified-operators-wj96f\" (UID: \"f714ab74-5c4b-4dd8-96b8-8b01120da79b\") " pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:34 crc kubenswrapper[4762]: I0308 01:47:34.337287 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pk8n\" (UniqueName: \"kubernetes.io/projected/f714ab74-5c4b-4dd8-96b8-8b01120da79b-kube-api-access-4pk8n\") pod \"certified-operators-wj96f\" (UID: \"f714ab74-5c4b-4dd8-96b8-8b01120da79b\") " pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:34 crc kubenswrapper[4762]: I0308 01:47:34.337867 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f714ab74-5c4b-4dd8-96b8-8b01120da79b-utilities\") pod \"certified-operators-wj96f\" (UID: \"f714ab74-5c4b-4dd8-96b8-8b01120da79b\") " pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:34 crc kubenswrapper[4762]: I0308 01:47:34.338023 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f714ab74-5c4b-4dd8-96b8-8b01120da79b-catalog-content\") pod \"certified-operators-wj96f\" (UID: \"f714ab74-5c4b-4dd8-96b8-8b01120da79b\") " pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:34 crc kubenswrapper[4762]: I0308 01:47:34.367351 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pk8n\" (UniqueName: \"kubernetes.io/projected/f714ab74-5c4b-4dd8-96b8-8b01120da79b-kube-api-access-4pk8n\") pod \"certified-operators-wj96f\" (UID: \"f714ab74-5c4b-4dd8-96b8-8b01120da79b\") " pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:34 crc kubenswrapper[4762]: I0308 01:47:34.528382 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:34.824798 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9hwb5"] Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:34.842272 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:34.868097 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbmxx" event={"ID":"75304ce2-5382-48a4-9667-4e73ee660238","Type":"ContainerStarted","Data":"ebb4c845a8a95bc2fa82da8498c9499fac7f81bbf0c84d1e48d9b75bed38f720"} Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:34.873860 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hwb5"] Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:34.966595 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49znz\" (UniqueName: \"kubernetes.io/projected/83422845-e200-4039-8315-bf1a268f9a72-kube-api-access-49znz\") pod \"community-operators-9hwb5\" (UID: \"83422845-e200-4039-8315-bf1a268f9a72\") " pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:34.966636 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83422845-e200-4039-8315-bf1a268f9a72-utilities\") pod \"community-operators-9hwb5\" (UID: \"83422845-e200-4039-8315-bf1a268f9a72\") " pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:34.966986 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/83422845-e200-4039-8315-bf1a268f9a72-catalog-content\") pod \"community-operators-9hwb5\" (UID: \"83422845-e200-4039-8315-bf1a268f9a72\") " pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:35.069097 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49znz\" (UniqueName: \"kubernetes.io/projected/83422845-e200-4039-8315-bf1a268f9a72-kube-api-access-49znz\") pod \"community-operators-9hwb5\" (UID: \"83422845-e200-4039-8315-bf1a268f9a72\") " pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:35.069144 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83422845-e200-4039-8315-bf1a268f9a72-utilities\") pod \"community-operators-9hwb5\" (UID: \"83422845-e200-4039-8315-bf1a268f9a72\") " pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:35.069211 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83422845-e200-4039-8315-bf1a268f9a72-catalog-content\") pod \"community-operators-9hwb5\" (UID: \"83422845-e200-4039-8315-bf1a268f9a72\") " pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:35.069671 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83422845-e200-4039-8315-bf1a268f9a72-catalog-content\") pod \"community-operators-9hwb5\" (UID: \"83422845-e200-4039-8315-bf1a268f9a72\") " pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:35.069894 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/83422845-e200-4039-8315-bf1a268f9a72-utilities\") pod \"community-operators-9hwb5\" (UID: \"83422845-e200-4039-8315-bf1a268f9a72\") " pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:35.099399 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49znz\" (UniqueName: \"kubernetes.io/projected/83422845-e200-4039-8315-bf1a268f9a72-kube-api-access-49znz\") pod \"community-operators-9hwb5\" (UID: \"83422845-e200-4039-8315-bf1a268f9a72\") " pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:35.197477 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:35 crc kubenswrapper[4762]: W0308 01:47:35.762946 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83422845_e200_4039_8315_bf1a268f9a72.slice/crio-a8e3a045779234a33ee4cf2b9deb27572c62e28efae0ede358af90fe672958e2 WatchSource:0}: Error finding container a8e3a045779234a33ee4cf2b9deb27572c62e28efae0ede358af90fe672958e2: Status 404 returned error can't find the container with id a8e3a045779234a33ee4cf2b9deb27572c62e28efae0ede358af90fe672958e2 Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:35.780668 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9hwb5"] Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:35.791083 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wj96f"] Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:35.888136 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wj96f" 
event={"ID":"f714ab74-5c4b-4dd8-96b8-8b01120da79b","Type":"ContainerStarted","Data":"7327c024d0359a2e1fa464d5e07525bd9ffbb750d718afab8b8cd4c882648b0d"} Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:35.889815 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hwb5" event={"ID":"83422845-e200-4039-8315-bf1a268f9a72","Type":"ContainerStarted","Data":"a8e3a045779234a33ee4cf2b9deb27572c62e28efae0ede358af90fe672958e2"} Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:35.892013 4762 generic.go:334] "Generic (PLEG): container finished" podID="75304ce2-5382-48a4-9667-4e73ee660238" containerID="ebb4c845a8a95bc2fa82da8498c9499fac7f81bbf0c84d1e48d9b75bed38f720" exitCode=0 Mar 08 01:47:35 crc kubenswrapper[4762]: I0308 01:47:35.892087 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbmxx" event={"ID":"75304ce2-5382-48a4-9667-4e73ee660238","Type":"ContainerDied","Data":"ebb4c845a8a95bc2fa82da8498c9499fac7f81bbf0c84d1e48d9b75bed38f720"} Mar 08 01:47:36 crc kubenswrapper[4762]: I0308 01:47:36.941021 4762 generic.go:334] "Generic (PLEG): container finished" podID="83422845-e200-4039-8315-bf1a268f9a72" containerID="1b6e0a4e1f701d1f50bac1f476a54856bf17d1ec74bfcb9c506596ddeded85ae" exitCode=0 Mar 08 01:47:36 crc kubenswrapper[4762]: I0308 01:47:36.941565 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hwb5" event={"ID":"83422845-e200-4039-8315-bf1a268f9a72","Type":"ContainerDied","Data":"1b6e0a4e1f701d1f50bac1f476a54856bf17d1ec74bfcb9c506596ddeded85ae"} Mar 08 01:47:36 crc kubenswrapper[4762]: I0308 01:47:36.944782 4762 generic.go:334] "Generic (PLEG): container finished" podID="f714ab74-5c4b-4dd8-96b8-8b01120da79b" containerID="13f9f3c38301038506eab497af6bd512b7e998bb54c4c0357f0fadca62a43c2b" exitCode=0 Mar 08 01:47:36 crc kubenswrapper[4762]: I0308 01:47:36.944827 4762 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-wj96f" event={"ID":"f714ab74-5c4b-4dd8-96b8-8b01120da79b","Type":"ContainerDied","Data":"13f9f3c38301038506eab497af6bd512b7e998bb54c4c0357f0fadca62a43c2b"} Mar 08 01:47:37 crc kubenswrapper[4762]: I0308 01:47:37.968409 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wj96f" event={"ID":"f714ab74-5c4b-4dd8-96b8-8b01120da79b","Type":"ContainerStarted","Data":"4a5fbd0982f7ce3180f6ebd84bf52ef0fd0f38bf34c0e941c1a78540706314e8"} Mar 08 01:47:37 crc kubenswrapper[4762]: I0308 01:47:37.977459 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbmxx" event={"ID":"75304ce2-5382-48a4-9667-4e73ee660238","Type":"ContainerStarted","Data":"6736a6e6550f38acbbbe78feda82030c49124ef3c175b26a7bbf1617f585815e"} Mar 08 01:47:38 crc kubenswrapper[4762]: I0308 01:47:38.029562 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nbmxx" podStartSLOduration=3.525951579 podStartE2EDuration="6.029540834s" podCreationTimestamp="2026-03-08 01:47:32 +0000 UTC" firstStartedPulling="2026-03-08 01:47:33.795910302 +0000 UTC m=+5075.270054686" lastFinishedPulling="2026-03-08 01:47:36.299499577 +0000 UTC m=+5077.773643941" observedRunningTime="2026-03-08 01:47:38.015892109 +0000 UTC m=+5079.490036453" watchObservedRunningTime="2026-03-08 01:47:38.029540834 +0000 UTC m=+5079.503685178" Mar 08 01:47:38 crc kubenswrapper[4762]: I0308 01:47:38.989290 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hwb5" event={"ID":"83422845-e200-4039-8315-bf1a268f9a72","Type":"ContainerStarted","Data":"c4de6a2993a87ff5acd22224d3ba06454193cc8bc81123e57b3615ae0aaefaee"} Mar 08 01:47:41 crc kubenswrapper[4762]: I0308 01:47:41.011207 4762 generic.go:334] "Generic (PLEG): container finished" podID="83422845-e200-4039-8315-bf1a268f9a72" 
containerID="c4de6a2993a87ff5acd22224d3ba06454193cc8bc81123e57b3615ae0aaefaee" exitCode=0 Mar 08 01:47:41 crc kubenswrapper[4762]: I0308 01:47:41.011386 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hwb5" event={"ID":"83422845-e200-4039-8315-bf1a268f9a72","Type":"ContainerDied","Data":"c4de6a2993a87ff5acd22224d3ba06454193cc8bc81123e57b3615ae0aaefaee"} Mar 08 01:47:41 crc kubenswrapper[4762]: I0308 01:47:41.014308 4762 generic.go:334] "Generic (PLEG): container finished" podID="f714ab74-5c4b-4dd8-96b8-8b01120da79b" containerID="4a5fbd0982f7ce3180f6ebd84bf52ef0fd0f38bf34c0e941c1a78540706314e8" exitCode=0 Mar 08 01:47:41 crc kubenswrapper[4762]: I0308 01:47:41.014338 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wj96f" event={"ID":"f714ab74-5c4b-4dd8-96b8-8b01120da79b","Type":"ContainerDied","Data":"4a5fbd0982f7ce3180f6ebd84bf52ef0fd0f38bf34c0e941c1a78540706314e8"} Mar 08 01:47:42 crc kubenswrapper[4762]: I0308 01:47:42.028007 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wj96f" event={"ID":"f714ab74-5c4b-4dd8-96b8-8b01120da79b","Type":"ContainerStarted","Data":"841ddee4f882d471dfdef1c268282a4d9d3bcbe6bfdd9c29b9046c27d80dbf8f"} Mar 08 01:47:42 crc kubenswrapper[4762]: I0308 01:47:42.031946 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hwb5" event={"ID":"83422845-e200-4039-8315-bf1a268f9a72","Type":"ContainerStarted","Data":"fd69e6e3679afc21092f884c901bf6123bad5deba89ca09c5d1587fdf62577d9"} Mar 08 01:47:42 crc kubenswrapper[4762]: I0308 01:47:42.064076 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wj96f" podStartSLOduration=3.6058197119999997 podStartE2EDuration="8.064052241s" podCreationTimestamp="2026-03-08 01:47:34 +0000 UTC" firstStartedPulling="2026-03-08 
01:47:36.946495227 +0000 UTC m=+5078.420639571" lastFinishedPulling="2026-03-08 01:47:41.404727756 +0000 UTC m=+5082.878872100" observedRunningTime="2026-03-08 01:47:42.054463569 +0000 UTC m=+5083.528607923" watchObservedRunningTime="2026-03-08 01:47:42.064052241 +0000 UTC m=+5083.538196595" Mar 08 01:47:42 crc kubenswrapper[4762]: I0308 01:47:42.089291 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9hwb5" podStartSLOduration=3.5658033749999998 podStartE2EDuration="8.089263067s" podCreationTimestamp="2026-03-08 01:47:34 +0000 UTC" firstStartedPulling="2026-03-08 01:47:36.945540138 +0000 UTC m=+5078.419684482" lastFinishedPulling="2026-03-08 01:47:41.46899982 +0000 UTC m=+5082.943144174" observedRunningTime="2026-03-08 01:47:42.075346374 +0000 UTC m=+5083.549490728" watchObservedRunningTime="2026-03-08 01:47:42.089263067 +0000 UTC m=+5083.563407451" Mar 08 01:47:42 crc kubenswrapper[4762]: I0308 01:47:42.758219 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:42 crc kubenswrapper[4762]: I0308 01:47:42.758263 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:43 crc kubenswrapper[4762]: I0308 01:47:43.819115 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-nbmxx" podUID="75304ce2-5382-48a4-9667-4e73ee660238" containerName="registry-server" probeResult="failure" output=< Mar 08 01:47:43 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 01:47:43 crc kubenswrapper[4762]: > Mar 08 01:47:44 crc kubenswrapper[4762]: I0308 01:47:44.529609 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:44 crc kubenswrapper[4762]: I0308 01:47:44.529662 4762 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:45 crc kubenswrapper[4762]: I0308 01:47:45.197795 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:45 crc kubenswrapper[4762]: I0308 01:47:45.198172 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:45 crc kubenswrapper[4762]: I0308 01:47:45.580436 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-wj96f" podUID="f714ab74-5c4b-4dd8-96b8-8b01120da79b" containerName="registry-server" probeResult="failure" output=< Mar 08 01:47:45 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 01:47:45 crc kubenswrapper[4762]: > Mar 08 01:47:46 crc kubenswrapper[4762]: I0308 01:47:46.243600 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9hwb5" podUID="83422845-e200-4039-8315-bf1a268f9a72" containerName="registry-server" probeResult="failure" output=< Mar 08 01:47:46 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 01:47:46 crc kubenswrapper[4762]: > Mar 08 01:47:52 crc kubenswrapper[4762]: I0308 01:47:52.814137 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:52 crc kubenswrapper[4762]: I0308 01:47:52.882947 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:53 crc kubenswrapper[4762]: I0308 01:47:53.067652 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbmxx"] Mar 08 01:47:54 crc kubenswrapper[4762]: I0308 01:47:54.165748 4762 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-marketplace-nbmxx" podUID="75304ce2-5382-48a4-9667-4e73ee660238" containerName="registry-server" containerID="cri-o://6736a6e6550f38acbbbe78feda82030c49124ef3c175b26a7bbf1617f585815e" gracePeriod=2 Mar 08 01:47:54 crc kubenswrapper[4762]: I0308 01:47:54.642637 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:54 crc kubenswrapper[4762]: I0308 01:47:54.701864 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:54 crc kubenswrapper[4762]: I0308 01:47:54.842809 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:54 crc kubenswrapper[4762]: I0308 01:47:54.901167 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75304ce2-5382-48a4-9667-4e73ee660238-utilities\") pod \"75304ce2-5382-48a4-9667-4e73ee660238\" (UID: \"75304ce2-5382-48a4-9667-4e73ee660238\") " Mar 08 01:47:54 crc kubenswrapper[4762]: I0308 01:47:54.901293 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nrdn\" (UniqueName: \"kubernetes.io/projected/75304ce2-5382-48a4-9667-4e73ee660238-kube-api-access-9nrdn\") pod \"75304ce2-5382-48a4-9667-4e73ee660238\" (UID: \"75304ce2-5382-48a4-9667-4e73ee660238\") " Mar 08 01:47:54 crc kubenswrapper[4762]: I0308 01:47:54.901407 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75304ce2-5382-48a4-9667-4e73ee660238-catalog-content\") pod \"75304ce2-5382-48a4-9667-4e73ee660238\" (UID: \"75304ce2-5382-48a4-9667-4e73ee660238\") " Mar 08 01:47:54 crc kubenswrapper[4762]: I0308 01:47:54.901709 4762 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75304ce2-5382-48a4-9667-4e73ee660238-utilities" (OuterVolumeSpecName: "utilities") pod "75304ce2-5382-48a4-9667-4e73ee660238" (UID: "75304ce2-5382-48a4-9667-4e73ee660238"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:47:54 crc kubenswrapper[4762]: I0308 01:47:54.902422 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75304ce2-5382-48a4-9667-4e73ee660238-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:47:54 crc kubenswrapper[4762]: I0308 01:47:54.922391 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75304ce2-5382-48a4-9667-4e73ee660238-kube-api-access-9nrdn" (OuterVolumeSpecName: "kube-api-access-9nrdn") pod "75304ce2-5382-48a4-9667-4e73ee660238" (UID: "75304ce2-5382-48a4-9667-4e73ee660238"). InnerVolumeSpecName "kube-api-access-9nrdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:47:54 crc kubenswrapper[4762]: I0308 01:47:54.927682 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75304ce2-5382-48a4-9667-4e73ee660238-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "75304ce2-5382-48a4-9667-4e73ee660238" (UID: "75304ce2-5382-48a4-9667-4e73ee660238"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.004337 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nrdn\" (UniqueName: \"kubernetes.io/projected/75304ce2-5382-48a4-9667-4e73ee660238-kube-api-access-9nrdn\") on node \"crc\" DevicePath \"\"" Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.004366 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75304ce2-5382-48a4-9667-4e73ee660238-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.177379 4762 generic.go:334] "Generic (PLEG): container finished" podID="75304ce2-5382-48a4-9667-4e73ee660238" containerID="6736a6e6550f38acbbbe78feda82030c49124ef3c175b26a7bbf1617f585815e" exitCode=0 Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.177428 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nbmxx" Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.177464 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbmxx" event={"ID":"75304ce2-5382-48a4-9667-4e73ee660238","Type":"ContainerDied","Data":"6736a6e6550f38acbbbe78feda82030c49124ef3c175b26a7bbf1617f585815e"} Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.177509 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nbmxx" event={"ID":"75304ce2-5382-48a4-9667-4e73ee660238","Type":"ContainerDied","Data":"9621da3123d324deeba2ac966943d49f047c641e9358f00789db2248ce0a8684"} Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.177527 4762 scope.go:117] "RemoveContainer" containerID="6736a6e6550f38acbbbe78feda82030c49124ef3c175b26a7bbf1617f585815e" Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.216505 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-nbmxx"] Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.218417 4762 scope.go:117] "RemoveContainer" containerID="ebb4c845a8a95bc2fa82da8498c9499fac7f81bbf0c84d1e48d9b75bed38f720" Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.226908 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nbmxx"] Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.242588 4762 scope.go:117] "RemoveContainer" containerID="1c144b22efb02ee83005aa05f4105f8db981092ea1bf73be576d3f9c42630316" Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.275597 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75304ce2-5382-48a4-9667-4e73ee660238" path="/var/lib/kubelet/pods/75304ce2-5382-48a4-9667-4e73ee660238/volumes" Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.279173 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.312676 4762 scope.go:117] "RemoveContainer" containerID="6736a6e6550f38acbbbe78feda82030c49124ef3c175b26a7bbf1617f585815e" Mar 08 01:47:55 crc kubenswrapper[4762]: E0308 01:47:55.315050 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6736a6e6550f38acbbbe78feda82030c49124ef3c175b26a7bbf1617f585815e\": container with ID starting with 6736a6e6550f38acbbbe78feda82030c49124ef3c175b26a7bbf1617f585815e not found: ID does not exist" containerID="6736a6e6550f38acbbbe78feda82030c49124ef3c175b26a7bbf1617f585815e" Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.315123 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6736a6e6550f38acbbbe78feda82030c49124ef3c175b26a7bbf1617f585815e"} err="failed to get container status 
\"6736a6e6550f38acbbbe78feda82030c49124ef3c175b26a7bbf1617f585815e\": rpc error: code = NotFound desc = could not find container \"6736a6e6550f38acbbbe78feda82030c49124ef3c175b26a7bbf1617f585815e\": container with ID starting with 6736a6e6550f38acbbbe78feda82030c49124ef3c175b26a7bbf1617f585815e not found: ID does not exist" Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.315170 4762 scope.go:117] "RemoveContainer" containerID="ebb4c845a8a95bc2fa82da8498c9499fac7f81bbf0c84d1e48d9b75bed38f720" Mar 08 01:47:55 crc kubenswrapper[4762]: E0308 01:47:55.315777 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebb4c845a8a95bc2fa82da8498c9499fac7f81bbf0c84d1e48d9b75bed38f720\": container with ID starting with ebb4c845a8a95bc2fa82da8498c9499fac7f81bbf0c84d1e48d9b75bed38f720 not found: ID does not exist" containerID="ebb4c845a8a95bc2fa82da8498c9499fac7f81bbf0c84d1e48d9b75bed38f720" Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.315808 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebb4c845a8a95bc2fa82da8498c9499fac7f81bbf0c84d1e48d9b75bed38f720"} err="failed to get container status \"ebb4c845a8a95bc2fa82da8498c9499fac7f81bbf0c84d1e48d9b75bed38f720\": rpc error: code = NotFound desc = could not find container \"ebb4c845a8a95bc2fa82da8498c9499fac7f81bbf0c84d1e48d9b75bed38f720\": container with ID starting with ebb4c845a8a95bc2fa82da8498c9499fac7f81bbf0c84d1e48d9b75bed38f720 not found: ID does not exist" Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.315861 4762 scope.go:117] "RemoveContainer" containerID="1c144b22efb02ee83005aa05f4105f8db981092ea1bf73be576d3f9c42630316" Mar 08 01:47:55 crc kubenswrapper[4762]: E0308 01:47:55.316181 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"1c144b22efb02ee83005aa05f4105f8db981092ea1bf73be576d3f9c42630316\": container with ID starting with 1c144b22efb02ee83005aa05f4105f8db981092ea1bf73be576d3f9c42630316 not found: ID does not exist" containerID="1c144b22efb02ee83005aa05f4105f8db981092ea1bf73be576d3f9c42630316" Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.316226 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c144b22efb02ee83005aa05f4105f8db981092ea1bf73be576d3f9c42630316"} err="failed to get container status \"1c144b22efb02ee83005aa05f4105f8db981092ea1bf73be576d3f9c42630316\": rpc error: code = NotFound desc = could not find container \"1c144b22efb02ee83005aa05f4105f8db981092ea1bf73be576d3f9c42630316\": container with ID starting with 1c144b22efb02ee83005aa05f4105f8db981092ea1bf73be576d3f9c42630316 not found: ID does not exist" Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.346437 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:55 crc kubenswrapper[4762]: I0308 01:47:55.856706 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wj96f"] Mar 08 01:47:56 crc kubenswrapper[4762]: I0308 01:47:56.191620 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wj96f" podUID="f714ab74-5c4b-4dd8-96b8-8b01120da79b" containerName="registry-server" containerID="cri-o://841ddee4f882d471dfdef1c268282a4d9d3bcbe6bfdd9c29b9046c27d80dbf8f" gracePeriod=2 Mar 08 01:47:56 crc kubenswrapper[4762]: E0308 01:47:56.545890 4762 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.196:54712->38.102.83.196:38853: write tcp 38.102.83.196:54712->38.102.83.196:38853: write: broken pipe Mar 08 01:47:56 crc kubenswrapper[4762]: I0308 01:47:56.833177 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:56 crc kubenswrapper[4762]: I0308 01:47:56.962560 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f714ab74-5c4b-4dd8-96b8-8b01120da79b-utilities\") pod \"f714ab74-5c4b-4dd8-96b8-8b01120da79b\" (UID: \"f714ab74-5c4b-4dd8-96b8-8b01120da79b\") " Mar 08 01:47:56 crc kubenswrapper[4762]: I0308 01:47:56.962940 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f714ab74-5c4b-4dd8-96b8-8b01120da79b-catalog-content\") pod \"f714ab74-5c4b-4dd8-96b8-8b01120da79b\" (UID: \"f714ab74-5c4b-4dd8-96b8-8b01120da79b\") " Mar 08 01:47:56 crc kubenswrapper[4762]: I0308 01:47:56.963111 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pk8n\" (UniqueName: \"kubernetes.io/projected/f714ab74-5c4b-4dd8-96b8-8b01120da79b-kube-api-access-4pk8n\") pod \"f714ab74-5c4b-4dd8-96b8-8b01120da79b\" (UID: \"f714ab74-5c4b-4dd8-96b8-8b01120da79b\") " Mar 08 01:47:56 crc kubenswrapper[4762]: I0308 01:47:56.963378 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f714ab74-5c4b-4dd8-96b8-8b01120da79b-utilities" (OuterVolumeSpecName: "utilities") pod "f714ab74-5c4b-4dd8-96b8-8b01120da79b" (UID: "f714ab74-5c4b-4dd8-96b8-8b01120da79b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:47:56 crc kubenswrapper[4762]: I0308 01:47:56.964206 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f714ab74-5c4b-4dd8-96b8-8b01120da79b-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:47:56 crc kubenswrapper[4762]: I0308 01:47:56.972511 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f714ab74-5c4b-4dd8-96b8-8b01120da79b-kube-api-access-4pk8n" (OuterVolumeSpecName: "kube-api-access-4pk8n") pod "f714ab74-5c4b-4dd8-96b8-8b01120da79b" (UID: "f714ab74-5c4b-4dd8-96b8-8b01120da79b"). InnerVolumeSpecName "kube-api-access-4pk8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.017112 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f714ab74-5c4b-4dd8-96b8-8b01120da79b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f714ab74-5c4b-4dd8-96b8-8b01120da79b" (UID: "f714ab74-5c4b-4dd8-96b8-8b01120da79b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.066317 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pk8n\" (UniqueName: \"kubernetes.io/projected/f714ab74-5c4b-4dd8-96b8-8b01120da79b-kube-api-access-4pk8n\") on node \"crc\" DevicePath \"\"" Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.066570 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f714ab74-5c4b-4dd8-96b8-8b01120da79b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.208559 4762 generic.go:334] "Generic (PLEG): container finished" podID="f714ab74-5c4b-4dd8-96b8-8b01120da79b" containerID="841ddee4f882d471dfdef1c268282a4d9d3bcbe6bfdd9c29b9046c27d80dbf8f" exitCode=0 Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.208641 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wj96f" event={"ID":"f714ab74-5c4b-4dd8-96b8-8b01120da79b","Type":"ContainerDied","Data":"841ddee4f882d471dfdef1c268282a4d9d3bcbe6bfdd9c29b9046c27d80dbf8f"} Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.208688 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wj96f" Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.208722 4762 scope.go:117] "RemoveContainer" containerID="841ddee4f882d471dfdef1c268282a4d9d3bcbe6bfdd9c29b9046c27d80dbf8f" Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.208697 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wj96f" event={"ID":"f714ab74-5c4b-4dd8-96b8-8b01120da79b","Type":"ContainerDied","Data":"7327c024d0359a2e1fa464d5e07525bd9ffbb750d718afab8b8cd4c882648b0d"} Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.262839 4762 scope.go:117] "RemoveContainer" containerID="4a5fbd0982f7ce3180f6ebd84bf52ef0fd0f38bf34c0e941c1a78540706314e8" Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.285599 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wj96f"] Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.285642 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wj96f"] Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.293468 4762 scope.go:117] "RemoveContainer" containerID="13f9f3c38301038506eab497af6bd512b7e998bb54c4c0357f0fadca62a43c2b" Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.383799 4762 scope.go:117] "RemoveContainer" containerID="841ddee4f882d471dfdef1c268282a4d9d3bcbe6bfdd9c29b9046c27d80dbf8f" Mar 08 01:47:57 crc kubenswrapper[4762]: E0308 01:47:57.386375 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"841ddee4f882d471dfdef1c268282a4d9d3bcbe6bfdd9c29b9046c27d80dbf8f\": container with ID starting with 841ddee4f882d471dfdef1c268282a4d9d3bcbe6bfdd9c29b9046c27d80dbf8f not found: ID does not exist" containerID="841ddee4f882d471dfdef1c268282a4d9d3bcbe6bfdd9c29b9046c27d80dbf8f" Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.386730 4762 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"841ddee4f882d471dfdef1c268282a4d9d3bcbe6bfdd9c29b9046c27d80dbf8f"} err="failed to get container status \"841ddee4f882d471dfdef1c268282a4d9d3bcbe6bfdd9c29b9046c27d80dbf8f\": rpc error: code = NotFound desc = could not find container \"841ddee4f882d471dfdef1c268282a4d9d3bcbe6bfdd9c29b9046c27d80dbf8f\": container with ID starting with 841ddee4f882d471dfdef1c268282a4d9d3bcbe6bfdd9c29b9046c27d80dbf8f not found: ID does not exist" Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.386751 4762 scope.go:117] "RemoveContainer" containerID="4a5fbd0982f7ce3180f6ebd84bf52ef0fd0f38bf34c0e941c1a78540706314e8" Mar 08 01:47:57 crc kubenswrapper[4762]: E0308 01:47:57.387400 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a5fbd0982f7ce3180f6ebd84bf52ef0fd0f38bf34c0e941c1a78540706314e8\": container with ID starting with 4a5fbd0982f7ce3180f6ebd84bf52ef0fd0f38bf34c0e941c1a78540706314e8 not found: ID does not exist" containerID="4a5fbd0982f7ce3180f6ebd84bf52ef0fd0f38bf34c0e941c1a78540706314e8" Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.387438 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a5fbd0982f7ce3180f6ebd84bf52ef0fd0f38bf34c0e941c1a78540706314e8"} err="failed to get container status \"4a5fbd0982f7ce3180f6ebd84bf52ef0fd0f38bf34c0e941c1a78540706314e8\": rpc error: code = NotFound desc = could not find container \"4a5fbd0982f7ce3180f6ebd84bf52ef0fd0f38bf34c0e941c1a78540706314e8\": container with ID starting with 4a5fbd0982f7ce3180f6ebd84bf52ef0fd0f38bf34c0e941c1a78540706314e8 not found: ID does not exist" Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.387468 4762 scope.go:117] "RemoveContainer" containerID="13f9f3c38301038506eab497af6bd512b7e998bb54c4c0357f0fadca62a43c2b" Mar 08 01:47:57 crc kubenswrapper[4762]: E0308 
01:47:57.387986 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13f9f3c38301038506eab497af6bd512b7e998bb54c4c0357f0fadca62a43c2b\": container with ID starting with 13f9f3c38301038506eab497af6bd512b7e998bb54c4c0357f0fadca62a43c2b not found: ID does not exist" containerID="13f9f3c38301038506eab497af6bd512b7e998bb54c4c0357f0fadca62a43c2b" Mar 08 01:47:57 crc kubenswrapper[4762]: I0308 01:47:57.388047 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13f9f3c38301038506eab497af6bd512b7e998bb54c4c0357f0fadca62a43c2b"} err="failed to get container status \"13f9f3c38301038506eab497af6bd512b7e998bb54c4c0357f0fadca62a43c2b\": rpc error: code = NotFound desc = could not find container \"13f9f3c38301038506eab497af6bd512b7e998bb54c4c0357f0fadca62a43c2b\": container with ID starting with 13f9f3c38301038506eab497af6bd512b7e998bb54c4c0357f0fadca62a43c2b not found: ID does not exist" Mar 08 01:47:58 crc kubenswrapper[4762]: I0308 01:47:58.251708 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hwb5"] Mar 08 01:47:58 crc kubenswrapper[4762]: I0308 01:47:58.252520 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9hwb5" podUID="83422845-e200-4039-8315-bf1a268f9a72" containerName="registry-server" containerID="cri-o://fd69e6e3679afc21092f884c901bf6123bad5deba89ca09c5d1587fdf62577d9" gracePeriod=2 Mar 08 01:47:58 crc kubenswrapper[4762]: I0308 01:47:58.864740 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.011615 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49znz\" (UniqueName: \"kubernetes.io/projected/83422845-e200-4039-8315-bf1a268f9a72-kube-api-access-49znz\") pod \"83422845-e200-4039-8315-bf1a268f9a72\" (UID: \"83422845-e200-4039-8315-bf1a268f9a72\") " Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.011817 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83422845-e200-4039-8315-bf1a268f9a72-utilities\") pod \"83422845-e200-4039-8315-bf1a268f9a72\" (UID: \"83422845-e200-4039-8315-bf1a268f9a72\") " Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.012056 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83422845-e200-4039-8315-bf1a268f9a72-catalog-content\") pod \"83422845-e200-4039-8315-bf1a268f9a72\" (UID: \"83422845-e200-4039-8315-bf1a268f9a72\") " Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.013546 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83422845-e200-4039-8315-bf1a268f9a72-utilities" (OuterVolumeSpecName: "utilities") pod "83422845-e200-4039-8315-bf1a268f9a72" (UID: "83422845-e200-4039-8315-bf1a268f9a72"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.017476 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83422845-e200-4039-8315-bf1a268f9a72-kube-api-access-49znz" (OuterVolumeSpecName: "kube-api-access-49znz") pod "83422845-e200-4039-8315-bf1a268f9a72" (UID: "83422845-e200-4039-8315-bf1a268f9a72"). InnerVolumeSpecName "kube-api-access-49znz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.057832 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83422845-e200-4039-8315-bf1a268f9a72-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83422845-e200-4039-8315-bf1a268f9a72" (UID: "83422845-e200-4039-8315-bf1a268f9a72"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.115361 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83422845-e200-4039-8315-bf1a268f9a72-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.115401 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83422845-e200-4039-8315-bf1a268f9a72-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.115446 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49znz\" (UniqueName: \"kubernetes.io/projected/83422845-e200-4039-8315-bf1a268f9a72-kube-api-access-49znz\") on node \"crc\" DevicePath \"\"" Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.240313 4762 generic.go:334] "Generic (PLEG): container finished" podID="83422845-e200-4039-8315-bf1a268f9a72" containerID="fd69e6e3679afc21092f884c901bf6123bad5deba89ca09c5d1587fdf62577d9" exitCode=0 Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.240381 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9hwb5" Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.240379 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hwb5" event={"ID":"83422845-e200-4039-8315-bf1a268f9a72","Type":"ContainerDied","Data":"fd69e6e3679afc21092f884c901bf6123bad5deba89ca09c5d1587fdf62577d9"} Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.240445 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9hwb5" event={"ID":"83422845-e200-4039-8315-bf1a268f9a72","Type":"ContainerDied","Data":"a8e3a045779234a33ee4cf2b9deb27572c62e28efae0ede358af90fe672958e2"} Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.240467 4762 scope.go:117] "RemoveContainer" containerID="fd69e6e3679afc21092f884c901bf6123bad5deba89ca09c5d1587fdf62577d9" Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.266582 4762 scope.go:117] "RemoveContainer" containerID="c4de6a2993a87ff5acd22224d3ba06454193cc8bc81123e57b3615ae0aaefaee" Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.290948 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f714ab74-5c4b-4dd8-96b8-8b01120da79b" path="/var/lib/kubelet/pods/f714ab74-5c4b-4dd8-96b8-8b01120da79b/volumes" Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.291943 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9hwb5"] Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.297503 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9hwb5"] Mar 08 01:47:59 crc kubenswrapper[4762]: I0308 01:47:59.311283 4762 scope.go:117] "RemoveContainer" containerID="1b6e0a4e1f701d1f50bac1f476a54856bf17d1ec74bfcb9c506596ddeded85ae" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.075281 4762 scope.go:117] "RemoveContainer" 
containerID="fd69e6e3679afc21092f884c901bf6123bad5deba89ca09c5d1587fdf62577d9" Mar 08 01:48:00 crc kubenswrapper[4762]: E0308 01:48:00.075987 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd69e6e3679afc21092f884c901bf6123bad5deba89ca09c5d1587fdf62577d9\": container with ID starting with fd69e6e3679afc21092f884c901bf6123bad5deba89ca09c5d1587fdf62577d9 not found: ID does not exist" containerID="fd69e6e3679afc21092f884c901bf6123bad5deba89ca09c5d1587fdf62577d9" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.076028 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd69e6e3679afc21092f884c901bf6123bad5deba89ca09c5d1587fdf62577d9"} err="failed to get container status \"fd69e6e3679afc21092f884c901bf6123bad5deba89ca09c5d1587fdf62577d9\": rpc error: code = NotFound desc = could not find container \"fd69e6e3679afc21092f884c901bf6123bad5deba89ca09c5d1587fdf62577d9\": container with ID starting with fd69e6e3679afc21092f884c901bf6123bad5deba89ca09c5d1587fdf62577d9 not found: ID does not exist" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.076057 4762 scope.go:117] "RemoveContainer" containerID="c4de6a2993a87ff5acd22224d3ba06454193cc8bc81123e57b3615ae0aaefaee" Mar 08 01:48:00 crc kubenswrapper[4762]: E0308 01:48:00.076294 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4de6a2993a87ff5acd22224d3ba06454193cc8bc81123e57b3615ae0aaefaee\": container with ID starting with c4de6a2993a87ff5acd22224d3ba06454193cc8bc81123e57b3615ae0aaefaee not found: ID does not exist" containerID="c4de6a2993a87ff5acd22224d3ba06454193cc8bc81123e57b3615ae0aaefaee" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.076324 4762 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c4de6a2993a87ff5acd22224d3ba06454193cc8bc81123e57b3615ae0aaefaee"} err="failed to get container status \"c4de6a2993a87ff5acd22224d3ba06454193cc8bc81123e57b3615ae0aaefaee\": rpc error: code = NotFound desc = could not find container \"c4de6a2993a87ff5acd22224d3ba06454193cc8bc81123e57b3615ae0aaefaee\": container with ID starting with c4de6a2993a87ff5acd22224d3ba06454193cc8bc81123e57b3615ae0aaefaee not found: ID does not exist" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.076340 4762 scope.go:117] "RemoveContainer" containerID="1b6e0a4e1f701d1f50bac1f476a54856bf17d1ec74bfcb9c506596ddeded85ae" Mar 08 01:48:00 crc kubenswrapper[4762]: E0308 01:48:00.076734 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b6e0a4e1f701d1f50bac1f476a54856bf17d1ec74bfcb9c506596ddeded85ae\": container with ID starting with 1b6e0a4e1f701d1f50bac1f476a54856bf17d1ec74bfcb9c506596ddeded85ae not found: ID does not exist" containerID="1b6e0a4e1f701d1f50bac1f476a54856bf17d1ec74bfcb9c506596ddeded85ae" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.076788 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b6e0a4e1f701d1f50bac1f476a54856bf17d1ec74bfcb9c506596ddeded85ae"} err="failed to get container status \"1b6e0a4e1f701d1f50bac1f476a54856bf17d1ec74bfcb9c506596ddeded85ae\": rpc error: code = NotFound desc = could not find container \"1b6e0a4e1f701d1f50bac1f476a54856bf17d1ec74bfcb9c506596ddeded85ae\": container with ID starting with 1b6e0a4e1f701d1f50bac1f476a54856bf17d1ec74bfcb9c506596ddeded85ae not found: ID does not exist" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.160550 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548908-s7sbm"] Mar 08 01:48:00 crc kubenswrapper[4762]: E0308 01:48:00.161068 4762 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="83422845-e200-4039-8315-bf1a268f9a72" containerName="extract-content" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.161086 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="83422845-e200-4039-8315-bf1a268f9a72" containerName="extract-content" Mar 08 01:48:00 crc kubenswrapper[4762]: E0308 01:48:00.161106 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f714ab74-5c4b-4dd8-96b8-8b01120da79b" containerName="extract-content" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.161112 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f714ab74-5c4b-4dd8-96b8-8b01120da79b" containerName="extract-content" Mar 08 01:48:00 crc kubenswrapper[4762]: E0308 01:48:00.161129 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f714ab74-5c4b-4dd8-96b8-8b01120da79b" containerName="registry-server" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.161148 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f714ab74-5c4b-4dd8-96b8-8b01120da79b" containerName="registry-server" Mar 08 01:48:00 crc kubenswrapper[4762]: E0308 01:48:00.161161 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f714ab74-5c4b-4dd8-96b8-8b01120da79b" containerName="extract-utilities" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.161168 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="f714ab74-5c4b-4dd8-96b8-8b01120da79b" containerName="extract-utilities" Mar 08 01:48:00 crc kubenswrapper[4762]: E0308 01:48:00.161191 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75304ce2-5382-48a4-9667-4e73ee660238" containerName="extract-utilities" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.161197 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="75304ce2-5382-48a4-9667-4e73ee660238" containerName="extract-utilities" Mar 08 01:48:00 crc kubenswrapper[4762]: E0308 01:48:00.161212 4762 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="83422845-e200-4039-8315-bf1a268f9a72" containerName="registry-server" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.161218 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="83422845-e200-4039-8315-bf1a268f9a72" containerName="registry-server" Mar 08 01:48:00 crc kubenswrapper[4762]: E0308 01:48:00.161228 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75304ce2-5382-48a4-9667-4e73ee660238" containerName="registry-server" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.161236 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="75304ce2-5382-48a4-9667-4e73ee660238" containerName="registry-server" Mar 08 01:48:00 crc kubenswrapper[4762]: E0308 01:48:00.161256 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83422845-e200-4039-8315-bf1a268f9a72" containerName="extract-utilities" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.161262 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="83422845-e200-4039-8315-bf1a268f9a72" containerName="extract-utilities" Mar 08 01:48:00 crc kubenswrapper[4762]: E0308 01:48:00.161275 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75304ce2-5382-48a4-9667-4e73ee660238" containerName="extract-content" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.161281 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="75304ce2-5382-48a4-9667-4e73ee660238" containerName="extract-content" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.161486 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="83422845-e200-4039-8315-bf1a268f9a72" containerName="registry-server" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.161508 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="f714ab74-5c4b-4dd8-96b8-8b01120da79b" containerName="registry-server" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.161527 4762 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="75304ce2-5382-48a4-9667-4e73ee660238" containerName="registry-server" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.162358 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548908-s7sbm" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.164706 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.166816 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.166816 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.178556 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548908-s7sbm"] Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.243941 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9klh\" (UniqueName: \"kubernetes.io/projected/b6ac93f4-5941-4071-9e63-8422db34deeb-kube-api-access-t9klh\") pod \"auto-csr-approver-29548908-s7sbm\" (UID: \"b6ac93f4-5941-4071-9e63-8422db34deeb\") " pod="openshift-infra/auto-csr-approver-29548908-s7sbm" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.346499 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9klh\" (UniqueName: \"kubernetes.io/projected/b6ac93f4-5941-4071-9e63-8422db34deeb-kube-api-access-t9klh\") pod \"auto-csr-approver-29548908-s7sbm\" (UID: \"b6ac93f4-5941-4071-9e63-8422db34deeb\") " pod="openshift-infra/auto-csr-approver-29548908-s7sbm" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.372780 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9klh\" 
(UniqueName: \"kubernetes.io/projected/b6ac93f4-5941-4071-9e63-8422db34deeb-kube-api-access-t9klh\") pod \"auto-csr-approver-29548908-s7sbm\" (UID: \"b6ac93f4-5941-4071-9e63-8422db34deeb\") " pod="openshift-infra/auto-csr-approver-29548908-s7sbm" Mar 08 01:48:00 crc kubenswrapper[4762]: I0308 01:48:00.481346 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548908-s7sbm" Mar 08 01:48:01 crc kubenswrapper[4762]: W0308 01:48:01.012302 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6ac93f4_5941_4071_9e63_8422db34deeb.slice/crio-53d8fb3c039ddc4023ed74a1456d9e806f2de874009b6550e7ede04f803f1926 WatchSource:0}: Error finding container 53d8fb3c039ddc4023ed74a1456d9e806f2de874009b6550e7ede04f803f1926: Status 404 returned error can't find the container with id 53d8fb3c039ddc4023ed74a1456d9e806f2de874009b6550e7ede04f803f1926 Mar 08 01:48:01 crc kubenswrapper[4762]: I0308 01:48:01.015364 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548908-s7sbm"] Mar 08 01:48:01 crc kubenswrapper[4762]: I0308 01:48:01.259872 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548908-s7sbm" event={"ID":"b6ac93f4-5941-4071-9e63-8422db34deeb","Type":"ContainerStarted","Data":"53d8fb3c039ddc4023ed74a1456d9e806f2de874009b6550e7ede04f803f1926"} Mar 08 01:48:01 crc kubenswrapper[4762]: I0308 01:48:01.273665 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83422845-e200-4039-8315-bf1a268f9a72" path="/var/lib/kubelet/pods/83422845-e200-4039-8315-bf1a268f9a72/volumes" Mar 08 01:48:04 crc kubenswrapper[4762]: I0308 01:48:04.313108 4762 generic.go:334] "Generic (PLEG): container finished" podID="b6ac93f4-5941-4071-9e63-8422db34deeb" containerID="7e635412e5066a1e546bca548b7dfd3ebab749dd42262b8553cc92b89b9d6bec" exitCode=0 Mar 08 
01:48:04 crc kubenswrapper[4762]: I0308 01:48:04.313148 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548908-s7sbm" event={"ID":"b6ac93f4-5941-4071-9e63-8422db34deeb","Type":"ContainerDied","Data":"7e635412e5066a1e546bca548b7dfd3ebab749dd42262b8553cc92b89b9d6bec"} Mar 08 01:48:05 crc kubenswrapper[4762]: I0308 01:48:05.817193 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548908-s7sbm" Mar 08 01:48:05 crc kubenswrapper[4762]: I0308 01:48:05.887248 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9klh\" (UniqueName: \"kubernetes.io/projected/b6ac93f4-5941-4071-9e63-8422db34deeb-kube-api-access-t9klh\") pod \"b6ac93f4-5941-4071-9e63-8422db34deeb\" (UID: \"b6ac93f4-5941-4071-9e63-8422db34deeb\") " Mar 08 01:48:05 crc kubenswrapper[4762]: I0308 01:48:05.897105 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6ac93f4-5941-4071-9e63-8422db34deeb-kube-api-access-t9klh" (OuterVolumeSpecName: "kube-api-access-t9klh") pod "b6ac93f4-5941-4071-9e63-8422db34deeb" (UID: "b6ac93f4-5941-4071-9e63-8422db34deeb"). InnerVolumeSpecName "kube-api-access-t9klh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:48:05 crc kubenswrapper[4762]: I0308 01:48:05.989464 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9klh\" (UniqueName: \"kubernetes.io/projected/b6ac93f4-5941-4071-9e63-8422db34deeb-kube-api-access-t9klh\") on node \"crc\" DevicePath \"\"" Mar 08 01:48:06 crc kubenswrapper[4762]: I0308 01:48:06.340922 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548908-s7sbm" event={"ID":"b6ac93f4-5941-4071-9e63-8422db34deeb","Type":"ContainerDied","Data":"53d8fb3c039ddc4023ed74a1456d9e806f2de874009b6550e7ede04f803f1926"} Mar 08 01:48:06 crc kubenswrapper[4762]: I0308 01:48:06.340959 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53d8fb3c039ddc4023ed74a1456d9e806f2de874009b6550e7ede04f803f1926" Mar 08 01:48:06 crc kubenswrapper[4762]: I0308 01:48:06.340998 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548908-s7sbm" Mar 08 01:48:06 crc kubenswrapper[4762]: I0308 01:48:06.912667 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548902-z6zrr"] Mar 08 01:48:06 crc kubenswrapper[4762]: I0308 01:48:06.929899 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548902-z6zrr"] Mar 08 01:48:07 crc kubenswrapper[4762]: I0308 01:48:07.286918 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54e03012-03a5-4a4e-8af3-b2b7837b4d86" path="/var/lib/kubelet/pods/54e03012-03a5-4a4e-8af3-b2b7837b4d86/volumes" Mar 08 01:48:08 crc kubenswrapper[4762]: E0308 01:48:08.246804 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83422845_e200_4039_8315_bf1a268f9a72.slice\": RecentStats: unable to find data in 
memory cache]" Mar 08 01:48:12 crc kubenswrapper[4762]: I0308 01:48:12.852371 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:48:12 crc kubenswrapper[4762]: I0308 01:48:12.853082 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:48:18 crc kubenswrapper[4762]: E0308 01:48:18.553495 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83422845_e200_4039_8315_bf1a268f9a72.slice\": RecentStats: unable to find data in memory cache]" Mar 08 01:48:28 crc kubenswrapper[4762]: E0308 01:48:28.866288 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83422845_e200_4039_8315_bf1a268f9a72.slice\": RecentStats: unable to find data in memory cache]" Mar 08 01:48:35 crc kubenswrapper[4762]: I0308 01:48:35.073121 4762 scope.go:117] "RemoveContainer" containerID="cef6ea73703b41977adc1a1a199142e42a1a888d17b59986861ddb19824d78c9" Mar 08 01:48:39 crc kubenswrapper[4762]: E0308 01:48:39.198470 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83422845_e200_4039_8315_bf1a268f9a72.slice\": RecentStats: unable to find data in memory cache]" Mar 08 01:48:42 crc kubenswrapper[4762]: I0308 
01:48:42.851627 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:48:42 crc kubenswrapper[4762]: I0308 01:48:42.852514 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:48:49 crc kubenswrapper[4762]: E0308 01:48:49.534992 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83422845_e200_4039_8315_bf1a268f9a72.slice\": RecentStats: unable to find data in memory cache]" Mar 08 01:49:12 crc kubenswrapper[4762]: I0308 01:49:12.851228 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:49:12 crc kubenswrapper[4762]: I0308 01:49:12.851917 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:49:12 crc kubenswrapper[4762]: I0308 01:49:12.851975 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 
01:49:12 crc kubenswrapper[4762]: I0308 01:49:12.852848 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2674f0d3d2914da9ebde9ec6dcca02911f6c9e8079fffe34619264632466e56f"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 01:49:12 crc kubenswrapper[4762]: I0308 01:49:12.852915 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://2674f0d3d2914da9ebde9ec6dcca02911f6c9e8079fffe34619264632466e56f" gracePeriod=600 Mar 08 01:49:13 crc kubenswrapper[4762]: I0308 01:49:13.259680 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="2674f0d3d2914da9ebde9ec6dcca02911f6c9e8079fffe34619264632466e56f" exitCode=0 Mar 08 01:49:13 crc kubenswrapper[4762]: I0308 01:49:13.259775 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"2674f0d3d2914da9ebde9ec6dcca02911f6c9e8079fffe34619264632466e56f"} Mar 08 01:49:13 crc kubenswrapper[4762]: I0308 01:49:13.260170 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70"} Mar 08 01:49:13 crc kubenswrapper[4762]: I0308 01:49:13.260198 4762 scope.go:117] "RemoveContainer" containerID="ac703e2fd386aaf4035ca5c3a20db9151939cf71a6bf47f6ac6dc4dc3fce6e27" Mar 08 01:50:00 crc kubenswrapper[4762]: I0308 01:50:00.154631 4762 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548910-x9425"] Mar 08 01:50:00 crc kubenswrapper[4762]: E0308 01:50:00.155674 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6ac93f4-5941-4071-9e63-8422db34deeb" containerName="oc" Mar 08 01:50:00 crc kubenswrapper[4762]: I0308 01:50:00.155692 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6ac93f4-5941-4071-9e63-8422db34deeb" containerName="oc" Mar 08 01:50:00 crc kubenswrapper[4762]: I0308 01:50:00.156830 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6ac93f4-5941-4071-9e63-8422db34deeb" containerName="oc" Mar 08 01:50:00 crc kubenswrapper[4762]: I0308 01:50:00.157678 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548910-x9425" Mar 08 01:50:00 crc kubenswrapper[4762]: I0308 01:50:00.159908 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:50:00 crc kubenswrapper[4762]: I0308 01:50:00.160276 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:50:00 crc kubenswrapper[4762]: I0308 01:50:00.160448 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:50:00 crc kubenswrapper[4762]: I0308 01:50:00.176982 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548910-x9425"] Mar 08 01:50:00 crc kubenswrapper[4762]: I0308 01:50:00.212266 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vxx5\" (UniqueName: \"kubernetes.io/projected/20ca3f54-c4fa-4b6b-857c-7e988bc9d704-kube-api-access-2vxx5\") pod \"auto-csr-approver-29548910-x9425\" (UID: \"20ca3f54-c4fa-4b6b-857c-7e988bc9d704\") " pod="openshift-infra/auto-csr-approver-29548910-x9425" 
Mar 08 01:50:00 crc kubenswrapper[4762]: I0308 01:50:00.315579 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vxx5\" (UniqueName: \"kubernetes.io/projected/20ca3f54-c4fa-4b6b-857c-7e988bc9d704-kube-api-access-2vxx5\") pod \"auto-csr-approver-29548910-x9425\" (UID: \"20ca3f54-c4fa-4b6b-857c-7e988bc9d704\") " pod="openshift-infra/auto-csr-approver-29548910-x9425" Mar 08 01:50:00 crc kubenswrapper[4762]: I0308 01:50:00.338370 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vxx5\" (UniqueName: \"kubernetes.io/projected/20ca3f54-c4fa-4b6b-857c-7e988bc9d704-kube-api-access-2vxx5\") pod \"auto-csr-approver-29548910-x9425\" (UID: \"20ca3f54-c4fa-4b6b-857c-7e988bc9d704\") " pod="openshift-infra/auto-csr-approver-29548910-x9425" Mar 08 01:50:00 crc kubenswrapper[4762]: I0308 01:50:00.489010 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548910-x9425" Mar 08 01:50:01 crc kubenswrapper[4762]: I0308 01:50:01.045583 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548910-x9425"] Mar 08 01:50:01 crc kubenswrapper[4762]: I0308 01:50:01.934980 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548910-x9425" event={"ID":"20ca3f54-c4fa-4b6b-857c-7e988bc9d704","Type":"ContainerStarted","Data":"70eace025802f8d86acbac1d7bc65334b3c65e210622e12a87325b49234e00bc"} Mar 08 01:50:02 crc kubenswrapper[4762]: I0308 01:50:02.944291 4762 generic.go:334] "Generic (PLEG): container finished" podID="20ca3f54-c4fa-4b6b-857c-7e988bc9d704" containerID="87e2c90b76c423b182abf87b0673890f1075c68adb30aea732b86efd6024d54a" exitCode=0 Mar 08 01:50:02 crc kubenswrapper[4762]: I0308 01:50:02.944359 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548910-x9425" 
event={"ID":"20ca3f54-c4fa-4b6b-857c-7e988bc9d704","Type":"ContainerDied","Data":"87e2c90b76c423b182abf87b0673890f1075c68adb30aea732b86efd6024d54a"} Mar 08 01:50:04 crc kubenswrapper[4762]: I0308 01:50:04.480048 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548910-x9425" Mar 08 01:50:04 crc kubenswrapper[4762]: I0308 01:50:04.624955 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vxx5\" (UniqueName: \"kubernetes.io/projected/20ca3f54-c4fa-4b6b-857c-7e988bc9d704-kube-api-access-2vxx5\") pod \"20ca3f54-c4fa-4b6b-857c-7e988bc9d704\" (UID: \"20ca3f54-c4fa-4b6b-857c-7e988bc9d704\") " Mar 08 01:50:04 crc kubenswrapper[4762]: I0308 01:50:04.966975 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548910-x9425" event={"ID":"20ca3f54-c4fa-4b6b-857c-7e988bc9d704","Type":"ContainerDied","Data":"70eace025802f8d86acbac1d7bc65334b3c65e210622e12a87325b49234e00bc"} Mar 08 01:50:04 crc kubenswrapper[4762]: I0308 01:50:04.967023 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70eace025802f8d86acbac1d7bc65334b3c65e210622e12a87325b49234e00bc" Mar 08 01:50:04 crc kubenswrapper[4762]: I0308 01:50:04.967040 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548910-x9425" Mar 08 01:50:05 crc kubenswrapper[4762]: I0308 01:50:05.404102 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ca3f54-c4fa-4b6b-857c-7e988bc9d704-kube-api-access-2vxx5" (OuterVolumeSpecName: "kube-api-access-2vxx5") pod "20ca3f54-c4fa-4b6b-857c-7e988bc9d704" (UID: "20ca3f54-c4fa-4b6b-857c-7e988bc9d704"). InnerVolumeSpecName "kube-api-access-2vxx5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:50:05 crc kubenswrapper[4762]: I0308 01:50:05.446240 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vxx5\" (UniqueName: \"kubernetes.io/projected/20ca3f54-c4fa-4b6b-857c-7e988bc9d704-kube-api-access-2vxx5\") on node \"crc\" DevicePath \"\"" Mar 08 01:50:05 crc kubenswrapper[4762]: I0308 01:50:05.588152 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548904-ft2tq"] Mar 08 01:50:05 crc kubenswrapper[4762]: I0308 01:50:05.599568 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548904-ft2tq"] Mar 08 01:50:07 crc kubenswrapper[4762]: I0308 01:50:07.280830 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5" path="/var/lib/kubelet/pods/5bb51c4c-a105-47ae-b3dc-f9e9ba60f7f5/volumes" Mar 08 01:50:35 crc kubenswrapper[4762]: I0308 01:50:35.281097 4762 scope.go:117] "RemoveContainer" containerID="38670526d583e74f5a0c9b4b7ea0895571f2ba826249add13d8c525cb09910a9" Mar 08 01:51:42 crc kubenswrapper[4762]: I0308 01:51:42.851458 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:51:42 crc kubenswrapper[4762]: I0308 01:51:42.852195 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:52:00 crc kubenswrapper[4762]: I0308 01:52:00.159333 4762 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29548912-r28tb"] Mar 08 01:52:00 crc kubenswrapper[4762]: E0308 01:52:00.160936 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20ca3f54-c4fa-4b6b-857c-7e988bc9d704" containerName="oc" Mar 08 01:52:00 crc kubenswrapper[4762]: I0308 01:52:00.160958 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ca3f54-c4fa-4b6b-857c-7e988bc9d704" containerName="oc" Mar 08 01:52:00 crc kubenswrapper[4762]: I0308 01:52:00.161381 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="20ca3f54-c4fa-4b6b-857c-7e988bc9d704" containerName="oc" Mar 08 01:52:00 crc kubenswrapper[4762]: I0308 01:52:00.162514 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548912-r28tb" Mar 08 01:52:00 crc kubenswrapper[4762]: I0308 01:52:00.165843 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:52:00 crc kubenswrapper[4762]: I0308 01:52:00.165917 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:52:00 crc kubenswrapper[4762]: I0308 01:52:00.167382 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:52:00 crc kubenswrapper[4762]: I0308 01:52:00.173616 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548912-r28tb"] Mar 08 01:52:00 crc kubenswrapper[4762]: I0308 01:52:00.257912 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z8jq\" (UniqueName: \"kubernetes.io/projected/af47464d-366b-4691-a51b-0851bb697897-kube-api-access-6z8jq\") pod \"auto-csr-approver-29548912-r28tb\" (UID: \"af47464d-366b-4691-a51b-0851bb697897\") " pod="openshift-infra/auto-csr-approver-29548912-r28tb" Mar 08 01:52:00 crc kubenswrapper[4762]: I0308 
01:52:00.360840 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6z8jq\" (UniqueName: \"kubernetes.io/projected/af47464d-366b-4691-a51b-0851bb697897-kube-api-access-6z8jq\") pod \"auto-csr-approver-29548912-r28tb\" (UID: \"af47464d-366b-4691-a51b-0851bb697897\") " pod="openshift-infra/auto-csr-approver-29548912-r28tb" Mar 08 01:52:00 crc kubenswrapper[4762]: I0308 01:52:00.381743 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z8jq\" (UniqueName: \"kubernetes.io/projected/af47464d-366b-4691-a51b-0851bb697897-kube-api-access-6z8jq\") pod \"auto-csr-approver-29548912-r28tb\" (UID: \"af47464d-366b-4691-a51b-0851bb697897\") " pod="openshift-infra/auto-csr-approver-29548912-r28tb" Mar 08 01:52:00 crc kubenswrapper[4762]: I0308 01:52:00.485698 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548912-r28tb" Mar 08 01:52:01 crc kubenswrapper[4762]: I0308 01:52:01.019960 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548912-r28tb"] Mar 08 01:52:01 crc kubenswrapper[4762]: W0308 01:52:01.040277 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf47464d_366b_4691_a51b_0851bb697897.slice/crio-510cc8ccf5fff14fd9695ad5493a3a31ce7b5b3aa8a2ec24f19c63f88af226a7 WatchSource:0}: Error finding container 510cc8ccf5fff14fd9695ad5493a3a31ce7b5b3aa8a2ec24f19c63f88af226a7: Status 404 returned error can't find the container with id 510cc8ccf5fff14fd9695ad5493a3a31ce7b5b3aa8a2ec24f19c63f88af226a7 Mar 08 01:52:01 crc kubenswrapper[4762]: I0308 01:52:01.044537 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 01:52:01 crc kubenswrapper[4762]: I0308 01:52:01.452618 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29548912-r28tb" event={"ID":"af47464d-366b-4691-a51b-0851bb697897","Type":"ContainerStarted","Data":"510cc8ccf5fff14fd9695ad5493a3a31ce7b5b3aa8a2ec24f19c63f88af226a7"} Mar 08 01:52:03 crc kubenswrapper[4762]: I0308 01:52:03.477773 4762 generic.go:334] "Generic (PLEG): container finished" podID="af47464d-366b-4691-a51b-0851bb697897" containerID="fab9933e0f37e9c0c54113a09e6c3ead2acb3d7b429596e949a0fa7aa0f43571" exitCode=0 Mar 08 01:52:03 crc kubenswrapper[4762]: I0308 01:52:03.478143 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548912-r28tb" event={"ID":"af47464d-366b-4691-a51b-0851bb697897","Type":"ContainerDied","Data":"fab9933e0f37e9c0c54113a09e6c3ead2acb3d7b429596e949a0fa7aa0f43571"} Mar 08 01:52:05 crc kubenswrapper[4762]: I0308 01:52:05.014484 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548912-r28tb" Mar 08 01:52:05 crc kubenswrapper[4762]: I0308 01:52:05.082563 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z8jq\" (UniqueName: \"kubernetes.io/projected/af47464d-366b-4691-a51b-0851bb697897-kube-api-access-6z8jq\") pod \"af47464d-366b-4691-a51b-0851bb697897\" (UID: \"af47464d-366b-4691-a51b-0851bb697897\") " Mar 08 01:52:05 crc kubenswrapper[4762]: I0308 01:52:05.089736 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af47464d-366b-4691-a51b-0851bb697897-kube-api-access-6z8jq" (OuterVolumeSpecName: "kube-api-access-6z8jq") pod "af47464d-366b-4691-a51b-0851bb697897" (UID: "af47464d-366b-4691-a51b-0851bb697897"). InnerVolumeSpecName "kube-api-access-6z8jq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:52:05 crc kubenswrapper[4762]: I0308 01:52:05.185416 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6z8jq\" (UniqueName: \"kubernetes.io/projected/af47464d-366b-4691-a51b-0851bb697897-kube-api-access-6z8jq\") on node \"crc\" DevicePath \"\"" Mar 08 01:52:05 crc kubenswrapper[4762]: E0308 01:52:05.384440 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf47464d_366b_4691_a51b_0851bb697897.slice\": RecentStats: unable to find data in memory cache]" Mar 08 01:52:05 crc kubenswrapper[4762]: I0308 01:52:05.505590 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548912-r28tb" event={"ID":"af47464d-366b-4691-a51b-0851bb697897","Type":"ContainerDied","Data":"510cc8ccf5fff14fd9695ad5493a3a31ce7b5b3aa8a2ec24f19c63f88af226a7"} Mar 08 01:52:05 crc kubenswrapper[4762]: I0308 01:52:05.506004 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="510cc8ccf5fff14fd9695ad5493a3a31ce7b5b3aa8a2ec24f19c63f88af226a7" Mar 08 01:52:05 crc kubenswrapper[4762]: I0308 01:52:05.505660 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548912-r28tb" Mar 08 01:52:06 crc kubenswrapper[4762]: I0308 01:52:06.095244 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548906-wk5v9"] Mar 08 01:52:06 crc kubenswrapper[4762]: I0308 01:52:06.106898 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548906-wk5v9"] Mar 08 01:52:07 crc kubenswrapper[4762]: I0308 01:52:07.291504 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33" path="/var/lib/kubelet/pods/8e2d52b5-d5c1-42d0-900f-9ee3a7f16e33/volumes" Mar 08 01:52:12 crc kubenswrapper[4762]: I0308 01:52:12.851708 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:52:12 crc kubenswrapper[4762]: I0308 01:52:12.852163 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:52:35 crc kubenswrapper[4762]: I0308 01:52:35.422802 4762 scope.go:117] "RemoveContainer" containerID="c6d779292a39fe81f79f3722820e4037ff09e3fdbe12a541ce568c5a711970f3" Mar 08 01:52:42 crc kubenswrapper[4762]: I0308 01:52:42.852117 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 01:52:42 crc kubenswrapper[4762]: 
I0308 01:52:42.852695 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 01:52:42 crc kubenswrapper[4762]: I0308 01:52:42.852748 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 01:52:42 crc kubenswrapper[4762]: I0308 01:52:42.853654 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 01:52:42 crc kubenswrapper[4762]: I0308 01:52:42.853706 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" gracePeriod=600 Mar 08 01:52:42 crc kubenswrapper[4762]: E0308 01:52:42.976204 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:52:43 crc kubenswrapper[4762]: I0308 01:52:43.361894 4762 generic.go:334] "Generic (PLEG): container finished" 
podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" exitCode=0 Mar 08 01:52:43 crc kubenswrapper[4762]: I0308 01:52:43.362494 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70"} Mar 08 01:52:43 crc kubenswrapper[4762]: I0308 01:52:43.362654 4762 scope.go:117] "RemoveContainer" containerID="2674f0d3d2914da9ebde9ec6dcca02911f6c9e8079fffe34619264632466e56f" Mar 08 01:52:43 crc kubenswrapper[4762]: I0308 01:52:43.363829 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:52:43 crc kubenswrapper[4762]: E0308 01:52:43.364571 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:52:58 crc kubenswrapper[4762]: I0308 01:52:58.264161 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:52:58 crc kubenswrapper[4762]: E0308 01:52:58.265420 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 
01:52:59 crc kubenswrapper[4762]: I0308 01:52:59.109838 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gcgp5"] Mar 08 01:52:59 crc kubenswrapper[4762]: E0308 01:52:59.110594 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af47464d-366b-4691-a51b-0851bb697897" containerName="oc" Mar 08 01:52:59 crc kubenswrapper[4762]: I0308 01:52:59.110623 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="af47464d-366b-4691-a51b-0851bb697897" containerName="oc" Mar 08 01:52:59 crc kubenswrapper[4762]: I0308 01:52:59.111104 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="af47464d-366b-4691-a51b-0851bb697897" containerName="oc" Mar 08 01:52:59 crc kubenswrapper[4762]: I0308 01:52:59.113929 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:52:59 crc kubenswrapper[4762]: I0308 01:52:59.123479 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gcgp5"] Mar 08 01:52:59 crc kubenswrapper[4762]: I0308 01:52:59.210873 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-utilities\") pod \"redhat-operators-gcgp5\" (UID: \"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca\") " pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:52:59 crc kubenswrapper[4762]: I0308 01:52:59.211292 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78j8q\" (UniqueName: \"kubernetes.io/projected/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-kube-api-access-78j8q\") pod \"redhat-operators-gcgp5\" (UID: \"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca\") " pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:52:59 crc kubenswrapper[4762]: I0308 01:52:59.211415 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-catalog-content\") pod \"redhat-operators-gcgp5\" (UID: \"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca\") " pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:52:59 crc kubenswrapper[4762]: I0308 01:52:59.315590 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78j8q\" (UniqueName: \"kubernetes.io/projected/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-kube-api-access-78j8q\") pod \"redhat-operators-gcgp5\" (UID: \"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca\") " pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:52:59 crc kubenswrapper[4762]: I0308 01:52:59.315688 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-catalog-content\") pod \"redhat-operators-gcgp5\" (UID: \"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca\") " pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:52:59 crc kubenswrapper[4762]: I0308 01:52:59.316068 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-utilities\") pod \"redhat-operators-gcgp5\" (UID: \"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca\") " pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:52:59 crc kubenswrapper[4762]: I0308 01:52:59.317029 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-utilities\") pod \"redhat-operators-gcgp5\" (UID: \"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca\") " pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:52:59 crc kubenswrapper[4762]: I0308 01:52:59.317127 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-catalog-content\") pod \"redhat-operators-gcgp5\" (UID: \"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca\") " pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:52:59 crc kubenswrapper[4762]: I0308 01:52:59.335610 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78j8q\" (UniqueName: \"kubernetes.io/projected/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-kube-api-access-78j8q\") pod \"redhat-operators-gcgp5\" (UID: \"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca\") " pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:52:59 crc kubenswrapper[4762]: I0308 01:52:59.461191 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:52:59 crc kubenswrapper[4762]: I0308 01:52:59.996612 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gcgp5"] Mar 08 01:53:00 crc kubenswrapper[4762]: W0308 01:53:00.001654 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56c73a54_aac0_410d_8d3b_7d2e8dbed6ca.slice/crio-d15acd4b27256c0de0be1caeb8d77c905e81507ae79a182e0d7c62a146cc8418 WatchSource:0}: Error finding container d15acd4b27256c0de0be1caeb8d77c905e81507ae79a182e0d7c62a146cc8418: Status 404 returned error can't find the container with id d15acd4b27256c0de0be1caeb8d77c905e81507ae79a182e0d7c62a146cc8418 Mar 08 01:53:00 crc kubenswrapper[4762]: I0308 01:53:00.623409 4762 generic.go:334] "Generic (PLEG): container finished" podID="56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" containerID="0964bcd0a4d04485012016f075d24a6eca600488e4de08c00ddc09bfdb366409" exitCode=0 Mar 08 01:53:00 crc kubenswrapper[4762]: I0308 01:53:00.623498 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcgp5" 
event={"ID":"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca","Type":"ContainerDied","Data":"0964bcd0a4d04485012016f075d24a6eca600488e4de08c00ddc09bfdb366409"} Mar 08 01:53:00 crc kubenswrapper[4762]: I0308 01:53:00.623890 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcgp5" event={"ID":"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca","Type":"ContainerStarted","Data":"d15acd4b27256c0de0be1caeb8d77c905e81507ae79a182e0d7c62a146cc8418"} Mar 08 01:53:02 crc kubenswrapper[4762]: I0308 01:53:02.662883 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcgp5" event={"ID":"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca","Type":"ContainerStarted","Data":"c4af52c82102ff97cde60d610716d38efcf48bb41b1148a17ec76d97cd947a29"} Mar 08 01:53:06 crc kubenswrapper[4762]: I0308 01:53:06.709598 4762 generic.go:334] "Generic (PLEG): container finished" podID="56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" containerID="c4af52c82102ff97cde60d610716d38efcf48bb41b1148a17ec76d97cd947a29" exitCode=0 Mar 08 01:53:06 crc kubenswrapper[4762]: I0308 01:53:06.709687 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcgp5" event={"ID":"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca","Type":"ContainerDied","Data":"c4af52c82102ff97cde60d610716d38efcf48bb41b1148a17ec76d97cd947a29"} Mar 08 01:53:07 crc kubenswrapper[4762]: I0308 01:53:07.723628 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcgp5" event={"ID":"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca","Type":"ContainerStarted","Data":"7f57a748d3cd20cf004d7773015ecc75bc93517136afb9164d004d96c92db3dc"} Mar 08 01:53:07 crc kubenswrapper[4762]: I0308 01:53:07.757964 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gcgp5" podStartSLOduration=2.252674486 podStartE2EDuration="8.757938209s" podCreationTimestamp="2026-03-08 01:52:59 +0000 UTC" 
firstStartedPulling="2026-03-08 01:53:00.628789207 +0000 UTC m=+5402.102933551" lastFinishedPulling="2026-03-08 01:53:07.13405293 +0000 UTC m=+5408.608197274" observedRunningTime="2026-03-08 01:53:07.751839331 +0000 UTC m=+5409.225983675" watchObservedRunningTime="2026-03-08 01:53:07.757938209 +0000 UTC m=+5409.232082563" Mar 08 01:53:09 crc kubenswrapper[4762]: I0308 01:53:09.273915 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:53:09 crc kubenswrapper[4762]: E0308 01:53:09.274646 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:53:09 crc kubenswrapper[4762]: I0308 01:53:09.461865 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:53:09 crc kubenswrapper[4762]: I0308 01:53:09.461940 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:53:10 crc kubenswrapper[4762]: I0308 01:53:10.548247 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gcgp5" podUID="56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" containerName="registry-server" probeResult="failure" output=< Mar 08 01:53:10 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 01:53:10 crc kubenswrapper[4762]: > Mar 08 01:53:20 crc kubenswrapper[4762]: I0308 01:53:20.520425 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gcgp5" podUID="56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" 
containerName="registry-server" probeResult="failure" output=< Mar 08 01:53:20 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 01:53:20 crc kubenswrapper[4762]: > Mar 08 01:53:24 crc kubenswrapper[4762]: I0308 01:53:24.264364 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:53:24 crc kubenswrapper[4762]: E0308 01:53:24.265362 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:53:29 crc kubenswrapper[4762]: I0308 01:53:29.560233 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:53:29 crc kubenswrapper[4762]: I0308 01:53:29.651859 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:53:30 crc kubenswrapper[4762]: I0308 01:53:30.294868 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gcgp5"] Mar 08 01:53:31 crc kubenswrapper[4762]: I0308 01:53:31.027749 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gcgp5" podUID="56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" containerName="registry-server" containerID="cri-o://7f57a748d3cd20cf004d7773015ecc75bc93517136afb9164d004d96c92db3dc" gracePeriod=2 Mar 08 01:53:31 crc kubenswrapper[4762]: I0308 01:53:31.648319 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:53:31 crc kubenswrapper[4762]: I0308 01:53:31.816355 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78j8q\" (UniqueName: \"kubernetes.io/projected/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-kube-api-access-78j8q\") pod \"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca\" (UID: \"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca\") " Mar 08 01:53:31 crc kubenswrapper[4762]: I0308 01:53:31.816498 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-catalog-content\") pod \"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca\" (UID: \"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca\") " Mar 08 01:53:31 crc kubenswrapper[4762]: I0308 01:53:31.816599 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-utilities\") pod \"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca\" (UID: \"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca\") " Mar 08 01:53:31 crc kubenswrapper[4762]: I0308 01:53:31.818004 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-utilities" (OuterVolumeSpecName: "utilities") pod "56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" (UID: "56c73a54-aac0-410d-8d3b-7d2e8dbed6ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:53:31 crc kubenswrapper[4762]: I0308 01:53:31.825814 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-kube-api-access-78j8q" (OuterVolumeSpecName: "kube-api-access-78j8q") pod "56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" (UID: "56c73a54-aac0-410d-8d3b-7d2e8dbed6ca"). InnerVolumeSpecName "kube-api-access-78j8q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:53:31 crc kubenswrapper[4762]: I0308 01:53:31.919279 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:53:31 crc kubenswrapper[4762]: I0308 01:53:31.919309 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78j8q\" (UniqueName: \"kubernetes.io/projected/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-kube-api-access-78j8q\") on node \"crc\" DevicePath \"\"" Mar 08 01:53:31 crc kubenswrapper[4762]: I0308 01:53:31.995302 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" (UID: "56c73a54-aac0-410d-8d3b-7d2e8dbed6ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:53:32 crc kubenswrapper[4762]: I0308 01:53:32.021371 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:53:32 crc kubenswrapper[4762]: I0308 01:53:32.038501 4762 generic.go:334] "Generic (PLEG): container finished" podID="56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" containerID="7f57a748d3cd20cf004d7773015ecc75bc93517136afb9164d004d96c92db3dc" exitCode=0 Mar 08 01:53:32 crc kubenswrapper[4762]: I0308 01:53:32.038551 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gcgp5" event={"ID":"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca","Type":"ContainerDied","Data":"7f57a748d3cd20cf004d7773015ecc75bc93517136afb9164d004d96c92db3dc"} Mar 08 01:53:32 crc kubenswrapper[4762]: I0308 01:53:32.038581 4762 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-gcgp5" event={"ID":"56c73a54-aac0-410d-8d3b-7d2e8dbed6ca","Type":"ContainerDied","Data":"d15acd4b27256c0de0be1caeb8d77c905e81507ae79a182e0d7c62a146cc8418"} Mar 08 01:53:32 crc kubenswrapper[4762]: I0308 01:53:32.038578 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gcgp5" Mar 08 01:53:32 crc kubenswrapper[4762]: I0308 01:53:32.038594 4762 scope.go:117] "RemoveContainer" containerID="7f57a748d3cd20cf004d7773015ecc75bc93517136afb9164d004d96c92db3dc" Mar 08 01:53:32 crc kubenswrapper[4762]: I0308 01:53:32.073819 4762 scope.go:117] "RemoveContainer" containerID="c4af52c82102ff97cde60d610716d38efcf48bb41b1148a17ec76d97cd947a29" Mar 08 01:53:32 crc kubenswrapper[4762]: I0308 01:53:32.075692 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gcgp5"] Mar 08 01:53:32 crc kubenswrapper[4762]: I0308 01:53:32.085412 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gcgp5"] Mar 08 01:53:32 crc kubenswrapper[4762]: I0308 01:53:32.101412 4762 scope.go:117] "RemoveContainer" containerID="0964bcd0a4d04485012016f075d24a6eca600488e4de08c00ddc09bfdb366409" Mar 08 01:53:32 crc kubenswrapper[4762]: I0308 01:53:32.158400 4762 scope.go:117] "RemoveContainer" containerID="7f57a748d3cd20cf004d7773015ecc75bc93517136afb9164d004d96c92db3dc" Mar 08 01:53:32 crc kubenswrapper[4762]: E0308 01:53:32.158933 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f57a748d3cd20cf004d7773015ecc75bc93517136afb9164d004d96c92db3dc\": container with ID starting with 7f57a748d3cd20cf004d7773015ecc75bc93517136afb9164d004d96c92db3dc not found: ID does not exist" containerID="7f57a748d3cd20cf004d7773015ecc75bc93517136afb9164d004d96c92db3dc" Mar 08 01:53:32 crc kubenswrapper[4762]: I0308 01:53:32.158971 4762 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f57a748d3cd20cf004d7773015ecc75bc93517136afb9164d004d96c92db3dc"} err="failed to get container status \"7f57a748d3cd20cf004d7773015ecc75bc93517136afb9164d004d96c92db3dc\": rpc error: code = NotFound desc = could not find container \"7f57a748d3cd20cf004d7773015ecc75bc93517136afb9164d004d96c92db3dc\": container with ID starting with 7f57a748d3cd20cf004d7773015ecc75bc93517136afb9164d004d96c92db3dc not found: ID does not exist" Mar 08 01:53:32 crc kubenswrapper[4762]: I0308 01:53:32.158999 4762 scope.go:117] "RemoveContainer" containerID="c4af52c82102ff97cde60d610716d38efcf48bb41b1148a17ec76d97cd947a29" Mar 08 01:53:32 crc kubenswrapper[4762]: E0308 01:53:32.159363 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4af52c82102ff97cde60d610716d38efcf48bb41b1148a17ec76d97cd947a29\": container with ID starting with c4af52c82102ff97cde60d610716d38efcf48bb41b1148a17ec76d97cd947a29 not found: ID does not exist" containerID="c4af52c82102ff97cde60d610716d38efcf48bb41b1148a17ec76d97cd947a29" Mar 08 01:53:32 crc kubenswrapper[4762]: I0308 01:53:32.159404 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4af52c82102ff97cde60d610716d38efcf48bb41b1148a17ec76d97cd947a29"} err="failed to get container status \"c4af52c82102ff97cde60d610716d38efcf48bb41b1148a17ec76d97cd947a29\": rpc error: code = NotFound desc = could not find container \"c4af52c82102ff97cde60d610716d38efcf48bb41b1148a17ec76d97cd947a29\": container with ID starting with c4af52c82102ff97cde60d610716d38efcf48bb41b1148a17ec76d97cd947a29 not found: ID does not exist" Mar 08 01:53:32 crc kubenswrapper[4762]: I0308 01:53:32.159432 4762 scope.go:117] "RemoveContainer" containerID="0964bcd0a4d04485012016f075d24a6eca600488e4de08c00ddc09bfdb366409" Mar 08 01:53:32 crc kubenswrapper[4762]: E0308 
01:53:32.159782 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0964bcd0a4d04485012016f075d24a6eca600488e4de08c00ddc09bfdb366409\": container with ID starting with 0964bcd0a4d04485012016f075d24a6eca600488e4de08c00ddc09bfdb366409 not found: ID does not exist" containerID="0964bcd0a4d04485012016f075d24a6eca600488e4de08c00ddc09bfdb366409" Mar 08 01:53:32 crc kubenswrapper[4762]: I0308 01:53:32.159815 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0964bcd0a4d04485012016f075d24a6eca600488e4de08c00ddc09bfdb366409"} err="failed to get container status \"0964bcd0a4d04485012016f075d24a6eca600488e4de08c00ddc09bfdb366409\": rpc error: code = NotFound desc = could not find container \"0964bcd0a4d04485012016f075d24a6eca600488e4de08c00ddc09bfdb366409\": container with ID starting with 0964bcd0a4d04485012016f075d24a6eca600488e4de08c00ddc09bfdb366409 not found: ID does not exist" Mar 08 01:53:33 crc kubenswrapper[4762]: I0308 01:53:33.274745 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" path="/var/lib/kubelet/pods/56c73a54-aac0-410d-8d3b-7d2e8dbed6ca/volumes" Mar 08 01:53:35 crc kubenswrapper[4762]: I0308 01:53:35.263957 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:53:35 crc kubenswrapper[4762]: E0308 01:53:35.264830 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:53:46 crc kubenswrapper[4762]: I0308 01:53:46.264282 
4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:53:46 crc kubenswrapper[4762]: E0308 01:53:46.265115 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:53:59 crc kubenswrapper[4762]: I0308 01:53:59.283977 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:53:59 crc kubenswrapper[4762]: E0308 01:53:59.284870 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:54:00 crc kubenswrapper[4762]: I0308 01:54:00.181814 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548914-cgh7w"] Mar 08 01:54:00 crc kubenswrapper[4762]: E0308 01:54:00.183065 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" containerName="registry-server" Mar 08 01:54:00 crc kubenswrapper[4762]: I0308 01:54:00.183225 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" containerName="registry-server" Mar 08 01:54:00 crc kubenswrapper[4762]: E0308 01:54:00.183379 4762 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" containerName="extract-utilities" Mar 08 01:54:00 crc kubenswrapper[4762]: I0308 01:54:00.183506 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" containerName="extract-utilities" Mar 08 01:54:00 crc kubenswrapper[4762]: E0308 01:54:00.183681 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" containerName="extract-content" Mar 08 01:54:00 crc kubenswrapper[4762]: I0308 01:54:00.183829 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" containerName="extract-content" Mar 08 01:54:00 crc kubenswrapper[4762]: I0308 01:54:00.184383 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c73a54-aac0-410d-8d3b-7d2e8dbed6ca" containerName="registry-server" Mar 08 01:54:00 crc kubenswrapper[4762]: I0308 01:54:00.186020 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548914-cgh7w" Mar 08 01:54:00 crc kubenswrapper[4762]: I0308 01:54:00.189794 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:54:00 crc kubenswrapper[4762]: I0308 01:54:00.190348 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:54:00 crc kubenswrapper[4762]: I0308 01:54:00.190920 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:54:00 crc kubenswrapper[4762]: I0308 01:54:00.195060 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548914-cgh7w"] Mar 08 01:54:00 crc kubenswrapper[4762]: I0308 01:54:00.373927 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccgf7\" (UniqueName: 
\"kubernetes.io/projected/de6847d8-05f2-4774-8643-940cc054d210-kube-api-access-ccgf7\") pod \"auto-csr-approver-29548914-cgh7w\" (UID: \"de6847d8-05f2-4774-8643-940cc054d210\") " pod="openshift-infra/auto-csr-approver-29548914-cgh7w" Mar 08 01:54:00 crc kubenswrapper[4762]: I0308 01:54:00.476413 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccgf7\" (UniqueName: \"kubernetes.io/projected/de6847d8-05f2-4774-8643-940cc054d210-kube-api-access-ccgf7\") pod \"auto-csr-approver-29548914-cgh7w\" (UID: \"de6847d8-05f2-4774-8643-940cc054d210\") " pod="openshift-infra/auto-csr-approver-29548914-cgh7w" Mar 08 01:54:00 crc kubenswrapper[4762]: I0308 01:54:00.505676 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccgf7\" (UniqueName: \"kubernetes.io/projected/de6847d8-05f2-4774-8643-940cc054d210-kube-api-access-ccgf7\") pod \"auto-csr-approver-29548914-cgh7w\" (UID: \"de6847d8-05f2-4774-8643-940cc054d210\") " pod="openshift-infra/auto-csr-approver-29548914-cgh7w" Mar 08 01:54:00 crc kubenswrapper[4762]: I0308 01:54:00.805880 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548914-cgh7w" Mar 08 01:54:01 crc kubenswrapper[4762]: I0308 01:54:01.324662 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548914-cgh7w"] Mar 08 01:54:01 crc kubenswrapper[4762]: I0308 01:54:01.441715 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548914-cgh7w" event={"ID":"de6847d8-05f2-4774-8643-940cc054d210","Type":"ContainerStarted","Data":"496ea2411bb06d6315cd6cca1b2d851a4d1f890fe5695488845914c173f188e9"} Mar 08 01:54:03 crc kubenswrapper[4762]: I0308 01:54:03.462620 4762 generic.go:334] "Generic (PLEG): container finished" podID="de6847d8-05f2-4774-8643-940cc054d210" containerID="d4225af5d925b0c5d7162ef844f77e5183e1f6a53de78fede3ee99b6e00cfdf0" exitCode=0 Mar 08 01:54:03 crc kubenswrapper[4762]: I0308 01:54:03.462799 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548914-cgh7w" event={"ID":"de6847d8-05f2-4774-8643-940cc054d210","Type":"ContainerDied","Data":"d4225af5d925b0c5d7162ef844f77e5183e1f6a53de78fede3ee99b6e00cfdf0"} Mar 08 01:54:05 crc kubenswrapper[4762]: I0308 01:54:05.012021 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548914-cgh7w" Mar 08 01:54:05 crc kubenswrapper[4762]: I0308 01:54:05.200418 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccgf7\" (UniqueName: \"kubernetes.io/projected/de6847d8-05f2-4774-8643-940cc054d210-kube-api-access-ccgf7\") pod \"de6847d8-05f2-4774-8643-940cc054d210\" (UID: \"de6847d8-05f2-4774-8643-940cc054d210\") " Mar 08 01:54:05 crc kubenswrapper[4762]: I0308 01:54:05.211883 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de6847d8-05f2-4774-8643-940cc054d210-kube-api-access-ccgf7" (OuterVolumeSpecName: "kube-api-access-ccgf7") pod "de6847d8-05f2-4774-8643-940cc054d210" (UID: "de6847d8-05f2-4774-8643-940cc054d210"). InnerVolumeSpecName "kube-api-access-ccgf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:54:05 crc kubenswrapper[4762]: I0308 01:54:05.305732 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccgf7\" (UniqueName: \"kubernetes.io/projected/de6847d8-05f2-4774-8643-940cc054d210-kube-api-access-ccgf7\") on node \"crc\" DevicePath \"\"" Mar 08 01:54:05 crc kubenswrapper[4762]: I0308 01:54:05.491912 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548914-cgh7w" event={"ID":"de6847d8-05f2-4774-8643-940cc054d210","Type":"ContainerDied","Data":"496ea2411bb06d6315cd6cca1b2d851a4d1f890fe5695488845914c173f188e9"} Mar 08 01:54:05 crc kubenswrapper[4762]: I0308 01:54:05.492181 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="496ea2411bb06d6315cd6cca1b2d851a4d1f890fe5695488845914c173f188e9" Mar 08 01:54:05 crc kubenswrapper[4762]: I0308 01:54:05.491961 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548914-cgh7w" Mar 08 01:54:06 crc kubenswrapper[4762]: I0308 01:54:06.107385 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548908-s7sbm"] Mar 08 01:54:06 crc kubenswrapper[4762]: I0308 01:54:06.125666 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548908-s7sbm"] Mar 08 01:54:07 crc kubenswrapper[4762]: I0308 01:54:07.279736 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6ac93f4-5941-4071-9e63-8422db34deeb" path="/var/lib/kubelet/pods/b6ac93f4-5941-4071-9e63-8422db34deeb/volumes" Mar 08 01:54:10 crc kubenswrapper[4762]: I0308 01:54:10.263882 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:54:10 crc kubenswrapper[4762]: E0308 01:54:10.265340 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:54:22 crc kubenswrapper[4762]: I0308 01:54:22.263376 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:54:22 crc kubenswrapper[4762]: E0308 01:54:22.264173 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" 
podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:54:35 crc kubenswrapper[4762]: I0308 01:54:35.263790 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:54:35 crc kubenswrapper[4762]: E0308 01:54:35.264666 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:54:35 crc kubenswrapper[4762]: I0308 01:54:35.571497 4762 scope.go:117] "RemoveContainer" containerID="7e635412e5066a1e546bca548b7dfd3ebab749dd42262b8553cc92b89b9d6bec" Mar 08 01:54:48 crc kubenswrapper[4762]: I0308 01:54:48.265195 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:54:48 crc kubenswrapper[4762]: E0308 01:54:48.267296 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:55:03 crc kubenswrapper[4762]: I0308 01:55:03.263626 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:55:03 crc kubenswrapper[4762]: E0308 01:55:03.264735 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:55:14 crc kubenswrapper[4762]: I0308 01:55:14.263869 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:55:14 crc kubenswrapper[4762]: E0308 01:55:14.264655 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:55:28 crc kubenswrapper[4762]: I0308 01:55:28.263336 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:55:28 crc kubenswrapper[4762]: E0308 01:55:28.265604 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:55:40 crc kubenswrapper[4762]: I0308 01:55:40.264324 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:55:40 crc kubenswrapper[4762]: E0308 01:55:40.265592 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:55:53 crc kubenswrapper[4762]: I0308 01:55:53.263909 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:55:53 crc kubenswrapper[4762]: E0308 01:55:53.264745 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:56:00 crc kubenswrapper[4762]: I0308 01:56:00.167409 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548916-qv2ld"] Mar 08 01:56:00 crc kubenswrapper[4762]: E0308 01:56:00.168485 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de6847d8-05f2-4774-8643-940cc054d210" containerName="oc" Mar 08 01:56:00 crc kubenswrapper[4762]: I0308 01:56:00.168499 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="de6847d8-05f2-4774-8643-940cc054d210" containerName="oc" Mar 08 01:56:00 crc kubenswrapper[4762]: I0308 01:56:00.168691 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="de6847d8-05f2-4774-8643-940cc054d210" containerName="oc" Mar 08 01:56:00 crc kubenswrapper[4762]: I0308 01:56:00.169450 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548916-qv2ld" Mar 08 01:56:00 crc kubenswrapper[4762]: I0308 01:56:00.172262 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:56:00 crc kubenswrapper[4762]: I0308 01:56:00.174589 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:56:00 crc kubenswrapper[4762]: I0308 01:56:00.174850 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:56:00 crc kubenswrapper[4762]: I0308 01:56:00.188565 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548916-qv2ld"] Mar 08 01:56:00 crc kubenswrapper[4762]: I0308 01:56:00.285130 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cnww\" (UniqueName: \"kubernetes.io/projected/e81de951-6da9-4f87-81e2-21dbcd5eec1c-kube-api-access-5cnww\") pod \"auto-csr-approver-29548916-qv2ld\" (UID: \"e81de951-6da9-4f87-81e2-21dbcd5eec1c\") " pod="openshift-infra/auto-csr-approver-29548916-qv2ld" Mar 08 01:56:00 crc kubenswrapper[4762]: I0308 01:56:00.388612 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cnww\" (UniqueName: \"kubernetes.io/projected/e81de951-6da9-4f87-81e2-21dbcd5eec1c-kube-api-access-5cnww\") pod \"auto-csr-approver-29548916-qv2ld\" (UID: \"e81de951-6da9-4f87-81e2-21dbcd5eec1c\") " pod="openshift-infra/auto-csr-approver-29548916-qv2ld" Mar 08 01:56:01 crc kubenswrapper[4762]: I0308 01:56:01.309787 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cnww\" (UniqueName: \"kubernetes.io/projected/e81de951-6da9-4f87-81e2-21dbcd5eec1c-kube-api-access-5cnww\") pod \"auto-csr-approver-29548916-qv2ld\" (UID: \"e81de951-6da9-4f87-81e2-21dbcd5eec1c\") " 
pod="openshift-infra/auto-csr-approver-29548916-qv2ld" Mar 08 01:56:01 crc kubenswrapper[4762]: I0308 01:56:01.406022 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548916-qv2ld" Mar 08 01:56:01 crc kubenswrapper[4762]: I0308 01:56:01.952653 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548916-qv2ld"] Mar 08 01:56:02 crc kubenswrapper[4762]: I0308 01:56:02.076349 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548916-qv2ld" event={"ID":"e81de951-6da9-4f87-81e2-21dbcd5eec1c","Type":"ContainerStarted","Data":"ce3b248c8eec379e985962b9907896f301465631e0f425082bafec7acb0392c3"} Mar 08 01:56:04 crc kubenswrapper[4762]: I0308 01:56:04.105519 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548916-qv2ld" event={"ID":"e81de951-6da9-4f87-81e2-21dbcd5eec1c","Type":"ContainerStarted","Data":"06fc938cd18314a4fce4a56b097af49118d1527026bfeb87e840f99dcec32377"} Mar 08 01:56:04 crc kubenswrapper[4762]: I0308 01:56:04.120475 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548916-qv2ld" podStartSLOduration=3.200194975 podStartE2EDuration="4.120460139s" podCreationTimestamp="2026-03-08 01:56:00 +0000 UTC" firstStartedPulling="2026-03-08 01:56:01.972717842 +0000 UTC m=+5583.446862186" lastFinishedPulling="2026-03-08 01:56:02.892982996 +0000 UTC m=+5584.367127350" observedRunningTime="2026-03-08 01:56:04.117726735 +0000 UTC m=+5585.591871079" watchObservedRunningTime="2026-03-08 01:56:04.120460139 +0000 UTC m=+5585.594604483" Mar 08 01:56:04 crc kubenswrapper[4762]: I0308 01:56:04.264944 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:56:04 crc kubenswrapper[4762]: E0308 01:56:04.265425 4762 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:56:05 crc kubenswrapper[4762]: I0308 01:56:05.120742 4762 generic.go:334] "Generic (PLEG): container finished" podID="e81de951-6da9-4f87-81e2-21dbcd5eec1c" containerID="06fc938cd18314a4fce4a56b097af49118d1527026bfeb87e840f99dcec32377" exitCode=0 Mar 08 01:56:05 crc kubenswrapper[4762]: I0308 01:56:05.120846 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548916-qv2ld" event={"ID":"e81de951-6da9-4f87-81e2-21dbcd5eec1c","Type":"ContainerDied","Data":"06fc938cd18314a4fce4a56b097af49118d1527026bfeb87e840f99dcec32377"} Mar 08 01:56:06 crc kubenswrapper[4762]: I0308 01:56:06.613946 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548916-qv2ld" Mar 08 01:56:06 crc kubenswrapper[4762]: I0308 01:56:06.658410 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cnww\" (UniqueName: \"kubernetes.io/projected/e81de951-6da9-4f87-81e2-21dbcd5eec1c-kube-api-access-5cnww\") pod \"e81de951-6da9-4f87-81e2-21dbcd5eec1c\" (UID: \"e81de951-6da9-4f87-81e2-21dbcd5eec1c\") " Mar 08 01:56:06 crc kubenswrapper[4762]: I0308 01:56:06.677999 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e81de951-6da9-4f87-81e2-21dbcd5eec1c-kube-api-access-5cnww" (OuterVolumeSpecName: "kube-api-access-5cnww") pod "e81de951-6da9-4f87-81e2-21dbcd5eec1c" (UID: "e81de951-6da9-4f87-81e2-21dbcd5eec1c"). InnerVolumeSpecName "kube-api-access-5cnww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:56:06 crc kubenswrapper[4762]: I0308 01:56:06.761725 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cnww\" (UniqueName: \"kubernetes.io/projected/e81de951-6da9-4f87-81e2-21dbcd5eec1c-kube-api-access-5cnww\") on node \"crc\" DevicePath \"\"" Mar 08 01:56:07 crc kubenswrapper[4762]: I0308 01:56:07.152995 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548916-qv2ld" event={"ID":"e81de951-6da9-4f87-81e2-21dbcd5eec1c","Type":"ContainerDied","Data":"ce3b248c8eec379e985962b9907896f301465631e0f425082bafec7acb0392c3"} Mar 08 01:56:07 crc kubenswrapper[4762]: I0308 01:56:07.153055 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce3b248c8eec379e985962b9907896f301465631e0f425082bafec7acb0392c3" Mar 08 01:56:07 crc kubenswrapper[4762]: I0308 01:56:07.153064 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548916-qv2ld" Mar 08 01:56:07 crc kubenswrapper[4762]: I0308 01:56:07.208385 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548910-x9425"] Mar 08 01:56:07 crc kubenswrapper[4762]: I0308 01:56:07.221074 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548910-x9425"] Mar 08 01:56:07 crc kubenswrapper[4762]: I0308 01:56:07.284143 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ca3f54-c4fa-4b6b-857c-7e988bc9d704" path="/var/lib/kubelet/pods/20ca3f54-c4fa-4b6b-857c-7e988bc9d704/volumes" Mar 08 01:56:16 crc kubenswrapper[4762]: I0308 01:56:16.264726 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:56:16 crc kubenswrapper[4762]: E0308 01:56:16.265953 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:56:27 crc kubenswrapper[4762]: I0308 01:56:27.270078 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:56:27 crc kubenswrapper[4762]: E0308 01:56:27.271295 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:56:35 crc kubenswrapper[4762]: I0308 01:56:35.729503 4762 scope.go:117] "RemoveContainer" containerID="87e2c90b76c423b182abf87b0673890f1075c68adb30aea732b86efd6024d54a" Mar 08 01:56:39 crc kubenswrapper[4762]: E0308 01:56:39.453173 4762 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.196:42138->38.102.83.196:38853: write tcp 38.102.83.196:42138->38.102.83.196:38853: write: broken pipe Mar 08 01:56:41 crc kubenswrapper[4762]: I0308 01:56:41.263905 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:56:41 crc kubenswrapper[4762]: E0308 01:56:41.264518 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:56:53 crc kubenswrapper[4762]: I0308 01:56:53.263243 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:56:53 crc kubenswrapper[4762]: E0308 01:56:53.264049 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:57:08 crc kubenswrapper[4762]: I0308 01:57:08.264452 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:57:08 crc kubenswrapper[4762]: E0308 01:57:08.265588 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:57:20 crc kubenswrapper[4762]: I0308 01:57:20.264290 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:57:20 crc kubenswrapper[4762]: E0308 01:57:20.265167 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:57:31 crc kubenswrapper[4762]: I0308 01:57:31.265481 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:57:31 crc kubenswrapper[4762]: E0308 01:57:31.266511 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 01:57:44 crc kubenswrapper[4762]: I0308 01:57:44.265320 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 01:57:45 crc kubenswrapper[4762]: I0308 01:57:45.433978 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"0082d97d952cb42aeca70f0b9e9e306bd0a1280ca1e198865dec731c06cbdd6b"} Mar 08 01:58:00 crc kubenswrapper[4762]: I0308 01:58:00.180268 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548918-m4mk6"] Mar 08 01:58:00 crc kubenswrapper[4762]: E0308 01:58:00.181699 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e81de951-6da9-4f87-81e2-21dbcd5eec1c" containerName="oc" Mar 08 01:58:00 crc kubenswrapper[4762]: I0308 01:58:00.181746 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e81de951-6da9-4f87-81e2-21dbcd5eec1c" containerName="oc" Mar 08 01:58:00 
crc kubenswrapper[4762]: I0308 01:58:00.182181 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e81de951-6da9-4f87-81e2-21dbcd5eec1c" containerName="oc" Mar 08 01:58:00 crc kubenswrapper[4762]: I0308 01:58:00.183493 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548918-m4mk6" Mar 08 01:58:00 crc kubenswrapper[4762]: I0308 01:58:00.186337 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 01:58:00 crc kubenswrapper[4762]: I0308 01:58:00.186821 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 01:58:00 crc kubenswrapper[4762]: I0308 01:58:00.189086 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 01:58:00 crc kubenswrapper[4762]: I0308 01:58:00.206148 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548918-m4mk6"] Mar 08 01:58:00 crc kubenswrapper[4762]: I0308 01:58:00.330312 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hf6l\" (UniqueName: \"kubernetes.io/projected/e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59-kube-api-access-2hf6l\") pod \"auto-csr-approver-29548918-m4mk6\" (UID: \"e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59\") " pod="openshift-infra/auto-csr-approver-29548918-m4mk6" Mar 08 01:58:00 crc kubenswrapper[4762]: I0308 01:58:00.433542 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hf6l\" (UniqueName: \"kubernetes.io/projected/e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59-kube-api-access-2hf6l\") pod \"auto-csr-approver-29548918-m4mk6\" (UID: \"e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59\") " pod="openshift-infra/auto-csr-approver-29548918-m4mk6" Mar 08 01:58:00 crc kubenswrapper[4762]: I0308 01:58:00.466974 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hf6l\" (UniqueName: \"kubernetes.io/projected/e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59-kube-api-access-2hf6l\") pod \"auto-csr-approver-29548918-m4mk6\" (UID: \"e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59\") " pod="openshift-infra/auto-csr-approver-29548918-m4mk6" Mar 08 01:58:00 crc kubenswrapper[4762]: I0308 01:58:00.505671 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548918-m4mk6" Mar 08 01:58:01 crc kubenswrapper[4762]: I0308 01:58:01.037900 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548918-m4mk6"] Mar 08 01:58:01 crc kubenswrapper[4762]: I0308 01:58:01.048210 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 01:58:01 crc kubenswrapper[4762]: I0308 01:58:01.625220 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548918-m4mk6" event={"ID":"e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59","Type":"ContainerStarted","Data":"51dc26f04000d848320427f8016093003a58b1b02420a5fc708be393e4f400a8"} Mar 08 01:58:02 crc kubenswrapper[4762]: I0308 01:58:02.637732 4762 generic.go:334] "Generic (PLEG): container finished" podID="e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59" containerID="a9e3c566b97b8dba3e1514c64e34c3ed4d4126771d40199e4651d826da4304d2" exitCode=0 Mar 08 01:58:02 crc kubenswrapper[4762]: I0308 01:58:02.637843 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548918-m4mk6" event={"ID":"e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59","Type":"ContainerDied","Data":"a9e3c566b97b8dba3e1514c64e34c3ed4d4126771d40199e4651d826da4304d2"} Mar 08 01:58:04 crc kubenswrapper[4762]: I0308 01:58:04.147970 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548918-m4mk6" Mar 08 01:58:04 crc kubenswrapper[4762]: I0308 01:58:04.238209 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hf6l\" (UniqueName: \"kubernetes.io/projected/e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59-kube-api-access-2hf6l\") pod \"e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59\" (UID: \"e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59\") " Mar 08 01:58:04 crc kubenswrapper[4762]: I0308 01:58:04.247938 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59-kube-api-access-2hf6l" (OuterVolumeSpecName: "kube-api-access-2hf6l") pod "e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59" (UID: "e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59"). InnerVolumeSpecName "kube-api-access-2hf6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:58:04 crc kubenswrapper[4762]: I0308 01:58:04.342061 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hf6l\" (UniqueName: \"kubernetes.io/projected/e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59-kube-api-access-2hf6l\") on node \"crc\" DevicePath \"\"" Mar 08 01:58:04 crc kubenswrapper[4762]: I0308 01:58:04.671294 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548918-m4mk6" Mar 08 01:58:04 crc kubenswrapper[4762]: I0308 01:58:04.671280 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548918-m4mk6" event={"ID":"e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59","Type":"ContainerDied","Data":"51dc26f04000d848320427f8016093003a58b1b02420a5fc708be393e4f400a8"} Mar 08 01:58:04 crc kubenswrapper[4762]: I0308 01:58:04.671480 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51dc26f04000d848320427f8016093003a58b1b02420a5fc708be393e4f400a8" Mar 08 01:58:05 crc kubenswrapper[4762]: I0308 01:58:05.257809 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548912-r28tb"] Mar 08 01:58:05 crc kubenswrapper[4762]: I0308 01:58:05.281872 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548912-r28tb"] Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.481125 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Mar 08 01:58:06 crc kubenswrapper[4762]: E0308 01:58:06.481650 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59" containerName="oc" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.481666 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59" containerName="oc" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.482078 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59" containerName="oc" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.483354 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.485850 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.487010 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hw7xm" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.490170 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.491916 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.500345 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.597902 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b14c85df-f56a-4a30-bf25-0f41cd88b32d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.598099 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.598241 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/b14c85df-f56a-4a30-bf25-0f41cd88b32d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.598323 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.598375 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47vpx\" (UniqueName: \"kubernetes.io/projected/b14c85df-f56a-4a30-bf25-0f41cd88b32d-kube-api-access-47vpx\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.598510 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b14c85df-f56a-4a30-bf25-0f41cd88b32d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.598610 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.598670 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.598718 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b14c85df-f56a-4a30-bf25-0f41cd88b32d-config-data\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.701298 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.701420 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b14c85df-f56a-4a30-bf25-0f41cd88b32d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.701466 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.701507 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47vpx\" (UniqueName: \"kubernetes.io/projected/b14c85df-f56a-4a30-bf25-0f41cd88b32d-kube-api-access-47vpx\") pod \"tempest-tests-tempest\" 
(UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.701576 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b14c85df-f56a-4a30-bf25-0f41cd88b32d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.701645 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.701682 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.701725 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b14c85df-f56a-4a30-bf25-0f41cd88b32d-config-data\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.701899 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b14c85df-f56a-4a30-bf25-0f41cd88b32d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 
01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.702621 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b14c85df-f56a-4a30-bf25-0f41cd88b32d-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.702919 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b14c85df-f56a-4a30-bf25-0f41cd88b32d-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.703902 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b14c85df-f56a-4a30-bf25-0f41cd88b32d-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.704964 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b14c85df-f56a-4a30-bf25-0f41cd88b32d-config-data\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.705905 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.712161 4762 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.712630 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.714427 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.724929 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47vpx\" (UniqueName: \"kubernetes.io/projected/b14c85df-f56a-4a30-bf25-0f41cd88b32d-kube-api-access-47vpx\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.752334 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " pod="openstack/tempest-tests-tempest" Mar 08 01:58:06 crc kubenswrapper[4762]: I0308 01:58:06.819262 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 08 01:58:07 crc kubenswrapper[4762]: I0308 01:58:07.279639 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af47464d-366b-4691-a51b-0851bb697897" path="/var/lib/kubelet/pods/af47464d-366b-4691-a51b-0851bb697897/volumes" Mar 08 01:58:07 crc kubenswrapper[4762]: I0308 01:58:07.349010 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Mar 08 01:58:07 crc kubenswrapper[4762]: I0308 01:58:07.704919 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b14c85df-f56a-4a30-bf25-0f41cd88b32d","Type":"ContainerStarted","Data":"b30da89c9562692c811db048d05efc17f8a3a418990f14e084d07f2276656924"} Mar 08 01:58:27 crc kubenswrapper[4762]: I0308 01:58:27.365837 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kp4v8"] Mar 08 01:58:27 crc kubenswrapper[4762]: I0308 01:58:27.371994 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:58:27 crc kubenswrapper[4762]: I0308 01:58:27.412308 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kp4v8"] Mar 08 01:58:27 crc kubenswrapper[4762]: I0308 01:58:27.472797 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9frjg\" (UniqueName: \"kubernetes.io/projected/6e825898-237a-4363-ba9c-d83e43cb9063-kube-api-access-9frjg\") pod \"certified-operators-kp4v8\" (UID: \"6e825898-237a-4363-ba9c-d83e43cb9063\") " pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:58:27 crc kubenswrapper[4762]: I0308 01:58:27.472904 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e825898-237a-4363-ba9c-d83e43cb9063-catalog-content\") pod \"certified-operators-kp4v8\" (UID: \"6e825898-237a-4363-ba9c-d83e43cb9063\") " pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:58:27 crc kubenswrapper[4762]: I0308 01:58:27.472997 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e825898-237a-4363-ba9c-d83e43cb9063-utilities\") pod \"certified-operators-kp4v8\" (UID: \"6e825898-237a-4363-ba9c-d83e43cb9063\") " pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:58:27 crc kubenswrapper[4762]: I0308 01:58:27.575047 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9frjg\" (UniqueName: \"kubernetes.io/projected/6e825898-237a-4363-ba9c-d83e43cb9063-kube-api-access-9frjg\") pod \"certified-operators-kp4v8\" (UID: \"6e825898-237a-4363-ba9c-d83e43cb9063\") " pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:58:27 crc kubenswrapper[4762]: I0308 01:58:27.575192 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e825898-237a-4363-ba9c-d83e43cb9063-catalog-content\") pod \"certified-operators-kp4v8\" (UID: \"6e825898-237a-4363-ba9c-d83e43cb9063\") " pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:58:27 crc kubenswrapper[4762]: I0308 01:58:27.575298 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e825898-237a-4363-ba9c-d83e43cb9063-utilities\") pod \"certified-operators-kp4v8\" (UID: \"6e825898-237a-4363-ba9c-d83e43cb9063\") " pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:58:27 crc kubenswrapper[4762]: I0308 01:58:27.576038 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e825898-237a-4363-ba9c-d83e43cb9063-utilities\") pod \"certified-operators-kp4v8\" (UID: \"6e825898-237a-4363-ba9c-d83e43cb9063\") " pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:58:27 crc kubenswrapper[4762]: I0308 01:58:27.576330 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e825898-237a-4363-ba9c-d83e43cb9063-catalog-content\") pod \"certified-operators-kp4v8\" (UID: \"6e825898-237a-4363-ba9c-d83e43cb9063\") " pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:58:27 crc kubenswrapper[4762]: I0308 01:58:27.618579 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9frjg\" (UniqueName: \"kubernetes.io/projected/6e825898-237a-4363-ba9c-d83e43cb9063-kube-api-access-9frjg\") pod \"certified-operators-kp4v8\" (UID: \"6e825898-237a-4363-ba9c-d83e43cb9063\") " pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:58:27 crc kubenswrapper[4762]: I0308 01:58:27.703865 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:58:30 crc kubenswrapper[4762]: I0308 01:58:30.240925 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-z62ws"] Mar 08 01:58:30 crc kubenswrapper[4762]: I0308 01:58:30.246808 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:58:30 crc kubenswrapper[4762]: I0308 01:58:30.258348 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z62ws"] Mar 08 01:58:30 crc kubenswrapper[4762]: I0308 01:58:30.335152 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t7ld\" (UniqueName: \"kubernetes.io/projected/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-kube-api-access-5t7ld\") pod \"redhat-marketplace-z62ws\" (UID: \"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f\") " pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:58:30 crc kubenswrapper[4762]: I0308 01:58:30.337367 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-catalog-content\") pod \"redhat-marketplace-z62ws\" (UID: \"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f\") " pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:58:30 crc kubenswrapper[4762]: I0308 01:58:30.337494 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-utilities\") pod \"redhat-marketplace-z62ws\" (UID: \"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f\") " pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:58:30 crc kubenswrapper[4762]: I0308 01:58:30.439993 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5t7ld\" (UniqueName: \"kubernetes.io/projected/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-kube-api-access-5t7ld\") pod \"redhat-marketplace-z62ws\" (UID: \"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f\") " pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:58:30 crc kubenswrapper[4762]: I0308 01:58:30.440121 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-catalog-content\") pod \"redhat-marketplace-z62ws\" (UID: \"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f\") " pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:58:30 crc kubenswrapper[4762]: I0308 01:58:30.440155 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-utilities\") pod \"redhat-marketplace-z62ws\" (UID: \"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f\") " pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:58:30 crc kubenswrapper[4762]: I0308 01:58:30.440686 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-utilities\") pod \"redhat-marketplace-z62ws\" (UID: \"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f\") " pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:58:30 crc kubenswrapper[4762]: I0308 01:58:30.440680 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-catalog-content\") pod \"redhat-marketplace-z62ws\" (UID: \"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f\") " pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:58:30 crc kubenswrapper[4762]: I0308 01:58:30.458285 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t7ld\" (UniqueName: 
\"kubernetes.io/projected/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-kube-api-access-5t7ld\") pod \"redhat-marketplace-z62ws\" (UID: \"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f\") " pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:58:30 crc kubenswrapper[4762]: I0308 01:58:30.584490 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:58:35 crc kubenswrapper[4762]: I0308 01:58:35.874928 4762 scope.go:117] "RemoveContainer" containerID="fab9933e0f37e9c0c54113a09e6c3ead2acb3d7b429596e949a0fa7aa0f43571" Mar 08 01:58:47 crc kubenswrapper[4762]: E0308 01:58:47.092463 4762 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Mar 08 01:58:47 crc kubenswrapper[4762]: E0308 01:58:47.096530 4762 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/open
stack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47vpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod tempest-tests-tempest_openstack(b14c85df-f56a-4a30-bf25-0f41cd88b32d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 08 01:58:47 crc kubenswrapper[4762]: E0308 01:58:47.098531 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="b14c85df-f56a-4a30-bf25-0f41cd88b32d" Mar 08 01:58:47 crc kubenswrapper[4762]: E0308 01:58:47.220403 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="b14c85df-f56a-4a30-bf25-0f41cd88b32d" Mar 08 01:58:47 crc kubenswrapper[4762]: I0308 01:58:47.680702 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-z62ws"] Mar 08 01:58:47 crc kubenswrapper[4762]: I0308 01:58:47.698854 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kp4v8"] Mar 08 01:58:48 crc kubenswrapper[4762]: I0308 01:58:48.224752 4762 generic.go:334] "Generic (PLEG): container finished" podID="6e825898-237a-4363-ba9c-d83e43cb9063" containerID="47ba01f59fbcc994c4c78f60120c83e0389b1e1be94a1125ad1cca91fdd20434" exitCode=0 Mar 08 01:58:48 crc kubenswrapper[4762]: I0308 01:58:48.225025 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp4v8" event={"ID":"6e825898-237a-4363-ba9c-d83e43cb9063","Type":"ContainerDied","Data":"47ba01f59fbcc994c4c78f60120c83e0389b1e1be94a1125ad1cca91fdd20434"} Mar 08 01:58:48 crc kubenswrapper[4762]: I0308 01:58:48.225049 4762 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-kp4v8" event={"ID":"6e825898-237a-4363-ba9c-d83e43cb9063","Type":"ContainerStarted","Data":"58488f23e7b091aa3979aca2332aca984f3a0376b6f84c3d895c155cc3bc93bf"} Mar 08 01:58:48 crc kubenswrapper[4762]: I0308 01:58:48.228071 4762 generic.go:334] "Generic (PLEG): container finished" podID="5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f" containerID="687a2058552a1cc675f990ea48015662ee6a2191003005656e43609b3fa9a1f9" exitCode=0 Mar 08 01:58:48 crc kubenswrapper[4762]: I0308 01:58:48.228093 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z62ws" event={"ID":"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f","Type":"ContainerDied","Data":"687a2058552a1cc675f990ea48015662ee6a2191003005656e43609b3fa9a1f9"} Mar 08 01:58:48 crc kubenswrapper[4762]: I0308 01:58:48.228111 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z62ws" event={"ID":"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f","Type":"ContainerStarted","Data":"41112cebff28d1c28f68ed75b1c20c6334980656964501be395f5b04518d456f"} Mar 08 01:58:49 crc kubenswrapper[4762]: I0308 01:58:49.241818 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp4v8" event={"ID":"6e825898-237a-4363-ba9c-d83e43cb9063","Type":"ContainerStarted","Data":"7dc59dd14cce10308bf39cf45ccef34f06054aaf55c04e0b2ccddb7a4f7e8d67"} Mar 08 01:58:49 crc kubenswrapper[4762]: I0308 01:58:49.244864 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z62ws" event={"ID":"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f","Type":"ContainerStarted","Data":"61b2b358571672b2e91695c376891d722c7e8f721954cd8ee2ee419c586750b4"} Mar 08 01:58:51 crc kubenswrapper[4762]: I0308 01:58:51.268322 4762 generic.go:334] "Generic (PLEG): container finished" podID="5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f" 
containerID="61b2b358571672b2e91695c376891d722c7e8f721954cd8ee2ee419c586750b4" exitCode=0 Mar 08 01:58:51 crc kubenswrapper[4762]: I0308 01:58:51.282565 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z62ws" event={"ID":"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f","Type":"ContainerDied","Data":"61b2b358571672b2e91695c376891d722c7e8f721954cd8ee2ee419c586750b4"} Mar 08 01:58:53 crc kubenswrapper[4762]: I0308 01:58:53.295055 4762 generic.go:334] "Generic (PLEG): container finished" podID="6e825898-237a-4363-ba9c-d83e43cb9063" containerID="7dc59dd14cce10308bf39cf45ccef34f06054aaf55c04e0b2ccddb7a4f7e8d67" exitCode=0 Mar 08 01:58:53 crc kubenswrapper[4762]: I0308 01:58:53.295146 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp4v8" event={"ID":"6e825898-237a-4363-ba9c-d83e43cb9063","Type":"ContainerDied","Data":"7dc59dd14cce10308bf39cf45ccef34f06054aaf55c04e0b2ccddb7a4f7e8d67"} Mar 08 01:58:53 crc kubenswrapper[4762]: I0308 01:58:53.298588 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z62ws" event={"ID":"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f","Type":"ContainerStarted","Data":"341d71249b676dd8773ec3862f766d6f8142e540859af585290bb46e8f68366d"} Mar 08 01:58:53 crc kubenswrapper[4762]: I0308 01:58:53.338778 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-z62ws" podStartSLOduration=19.461432529 podStartE2EDuration="23.338750382s" podCreationTimestamp="2026-03-08 01:58:30 +0000 UTC" firstStartedPulling="2026-03-08 01:58:48.229318866 +0000 UTC m=+5749.703463210" lastFinishedPulling="2026-03-08 01:58:52.106636699 +0000 UTC m=+5753.580781063" observedRunningTime="2026-03-08 01:58:53.330470377 +0000 UTC m=+5754.804614721" watchObservedRunningTime="2026-03-08 01:58:53.338750382 +0000 UTC m=+5754.812894726" Mar 08 01:58:54 crc kubenswrapper[4762]: I0308 
01:58:54.322109 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp4v8" event={"ID":"6e825898-237a-4363-ba9c-d83e43cb9063","Type":"ContainerStarted","Data":"1971727fbbaf575df1dee360e05f20cc715477901450a19cd98bcbe8bfa86871"} Mar 08 01:58:54 crc kubenswrapper[4762]: I0308 01:58:54.362777 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kp4v8" podStartSLOduration=21.879353504 podStartE2EDuration="27.362729401s" podCreationTimestamp="2026-03-08 01:58:27 +0000 UTC" firstStartedPulling="2026-03-08 01:58:48.226438978 +0000 UTC m=+5749.700583312" lastFinishedPulling="2026-03-08 01:58:53.709814825 +0000 UTC m=+5755.183959209" observedRunningTime="2026-03-08 01:58:54.346318966 +0000 UTC m=+5755.820463350" watchObservedRunningTime="2026-03-08 01:58:54.362729401 +0000 UTC m=+5755.836873755" Mar 08 01:58:57 crc kubenswrapper[4762]: I0308 01:58:57.704350 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:58:57 crc kubenswrapper[4762]: I0308 01:58:57.704810 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:58:57 crc kubenswrapper[4762]: I0308 01:58:57.776172 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:58:58 crc kubenswrapper[4762]: I0308 01:58:58.417977 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:58:58 crc kubenswrapper[4762]: I0308 01:58:58.905004 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kp4v8"] Mar 08 01:59:00 crc kubenswrapper[4762]: I0308 01:59:00.398073 4762 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-kp4v8" podUID="6e825898-237a-4363-ba9c-d83e43cb9063" containerName="registry-server" containerID="cri-o://1971727fbbaf575df1dee360e05f20cc715477901450a19cd98bcbe8bfa86871" gracePeriod=2 Mar 08 01:59:00 crc kubenswrapper[4762]: I0308 01:59:00.595002 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:59:00 crc kubenswrapper[4762]: I0308 01:59:00.595054 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:59:00 crc kubenswrapper[4762]: I0308 01:59:00.680664 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.065498 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.142744 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9frjg\" (UniqueName: \"kubernetes.io/projected/6e825898-237a-4363-ba9c-d83e43cb9063-kube-api-access-9frjg\") pod \"6e825898-237a-4363-ba9c-d83e43cb9063\" (UID: \"6e825898-237a-4363-ba9c-d83e43cb9063\") " Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.143658 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e825898-237a-4363-ba9c-d83e43cb9063-catalog-content\") pod \"6e825898-237a-4363-ba9c-d83e43cb9063\" (UID: \"6e825898-237a-4363-ba9c-d83e43cb9063\") " Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.144379 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e825898-237a-4363-ba9c-d83e43cb9063-utilities\") pod 
\"6e825898-237a-4363-ba9c-d83e43cb9063\" (UID: \"6e825898-237a-4363-ba9c-d83e43cb9063\") " Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.145283 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e825898-237a-4363-ba9c-d83e43cb9063-utilities" (OuterVolumeSpecName: "utilities") pod "6e825898-237a-4363-ba9c-d83e43cb9063" (UID: "6e825898-237a-4363-ba9c-d83e43cb9063"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.145621 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e825898-237a-4363-ba9c-d83e43cb9063-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.164889 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e825898-237a-4363-ba9c-d83e43cb9063-kube-api-access-9frjg" (OuterVolumeSpecName: "kube-api-access-9frjg") pod "6e825898-237a-4363-ba9c-d83e43cb9063" (UID: "6e825898-237a-4363-ba9c-d83e43cb9063"). InnerVolumeSpecName "kube-api-access-9frjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.211640 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e825898-237a-4363-ba9c-d83e43cb9063-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e825898-237a-4363-ba9c-d83e43cb9063" (UID: "6e825898-237a-4363-ba9c-d83e43cb9063"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.248170 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9frjg\" (UniqueName: \"kubernetes.io/projected/6e825898-237a-4363-ba9c-d83e43cb9063-kube-api-access-9frjg\") on node \"crc\" DevicePath \"\"" Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.248202 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e825898-237a-4363-ba9c-d83e43cb9063-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.417354 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b14c85df-f56a-4a30-bf25-0f41cd88b32d","Type":"ContainerStarted","Data":"6e746020429a15995843491f803dbdedbba8eb5686eeddfc5a22aec3fbd37be1"} Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.422299 4762 generic.go:334] "Generic (PLEG): container finished" podID="6e825898-237a-4363-ba9c-d83e43cb9063" containerID="1971727fbbaf575df1dee360e05f20cc715477901450a19cd98bcbe8bfa86871" exitCode=0 Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.422435 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kp4v8" Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.422425 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp4v8" event={"ID":"6e825898-237a-4363-ba9c-d83e43cb9063","Type":"ContainerDied","Data":"1971727fbbaf575df1dee360e05f20cc715477901450a19cd98bcbe8bfa86871"} Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.422520 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kp4v8" event={"ID":"6e825898-237a-4363-ba9c-d83e43cb9063","Type":"ContainerDied","Data":"58488f23e7b091aa3979aca2332aca984f3a0376b6f84c3d895c155cc3bc93bf"} Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.422554 4762 scope.go:117] "RemoveContainer" containerID="1971727fbbaf575df1dee360e05f20cc715477901450a19cd98bcbe8bfa86871" Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.453460 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.10436333 podStartE2EDuration="56.453430401s" podCreationTimestamp="2026-03-08 01:58:05 +0000 UTC" firstStartedPulling="2026-03-08 01:58:07.351886264 +0000 UTC m=+5708.826030618" lastFinishedPulling="2026-03-08 01:58:58.700953325 +0000 UTC m=+5760.175097689" observedRunningTime="2026-03-08 01:59:01.443220266 +0000 UTC m=+5762.917364640" watchObservedRunningTime="2026-03-08 01:59:01.453430401 +0000 UTC m=+5762.927574755" Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.470531 4762 scope.go:117] "RemoveContainer" containerID="7dc59dd14cce10308bf39cf45ccef34f06054aaf55c04e0b2ccddb7a4f7e8d67" Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.482718 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kp4v8"] Mar 08 01:59:01 crc kubenswrapper[4762]: I0308 01:59:01.493476 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-kp4v8"] Mar 08 01:59:02 crc kubenswrapper[4762]: I0308 01:59:02.228144 4762 scope.go:117] "RemoveContainer" containerID="47ba01f59fbcc994c4c78f60120c83e0389b1e1be94a1125ad1cca91fdd20434" Mar 08 01:59:02 crc kubenswrapper[4762]: I0308 01:59:02.348504 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:59:02 crc kubenswrapper[4762]: I0308 01:59:02.392399 4762 scope.go:117] "RemoveContainer" containerID="1971727fbbaf575df1dee360e05f20cc715477901450a19cd98bcbe8bfa86871" Mar 08 01:59:02 crc kubenswrapper[4762]: E0308 01:59:02.393061 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1971727fbbaf575df1dee360e05f20cc715477901450a19cd98bcbe8bfa86871\": container with ID starting with 1971727fbbaf575df1dee360e05f20cc715477901450a19cd98bcbe8bfa86871 not found: ID does not exist" containerID="1971727fbbaf575df1dee360e05f20cc715477901450a19cd98bcbe8bfa86871" Mar 08 01:59:02 crc kubenswrapper[4762]: I0308 01:59:02.393104 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1971727fbbaf575df1dee360e05f20cc715477901450a19cd98bcbe8bfa86871"} err="failed to get container status \"1971727fbbaf575df1dee360e05f20cc715477901450a19cd98bcbe8bfa86871\": rpc error: code = NotFound desc = could not find container \"1971727fbbaf575df1dee360e05f20cc715477901450a19cd98bcbe8bfa86871\": container with ID starting with 1971727fbbaf575df1dee360e05f20cc715477901450a19cd98bcbe8bfa86871 not found: ID does not exist" Mar 08 01:59:02 crc kubenswrapper[4762]: I0308 01:59:02.393131 4762 scope.go:117] "RemoveContainer" containerID="7dc59dd14cce10308bf39cf45ccef34f06054aaf55c04e0b2ccddb7a4f7e8d67" Mar 08 01:59:02 crc kubenswrapper[4762]: E0308 01:59:02.393454 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"7dc59dd14cce10308bf39cf45ccef34f06054aaf55c04e0b2ccddb7a4f7e8d67\": container with ID starting with 7dc59dd14cce10308bf39cf45ccef34f06054aaf55c04e0b2ccddb7a4f7e8d67 not found: ID does not exist" containerID="7dc59dd14cce10308bf39cf45ccef34f06054aaf55c04e0b2ccddb7a4f7e8d67" Mar 08 01:59:02 crc kubenswrapper[4762]: I0308 01:59:02.393478 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dc59dd14cce10308bf39cf45ccef34f06054aaf55c04e0b2ccddb7a4f7e8d67"} err="failed to get container status \"7dc59dd14cce10308bf39cf45ccef34f06054aaf55c04e0b2ccddb7a4f7e8d67\": rpc error: code = NotFound desc = could not find container \"7dc59dd14cce10308bf39cf45ccef34f06054aaf55c04e0b2ccddb7a4f7e8d67\": container with ID starting with 7dc59dd14cce10308bf39cf45ccef34f06054aaf55c04e0b2ccddb7a4f7e8d67 not found: ID does not exist" Mar 08 01:59:02 crc kubenswrapper[4762]: I0308 01:59:02.393490 4762 scope.go:117] "RemoveContainer" containerID="47ba01f59fbcc994c4c78f60120c83e0389b1e1be94a1125ad1cca91fdd20434" Mar 08 01:59:02 crc kubenswrapper[4762]: E0308 01:59:02.393849 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47ba01f59fbcc994c4c78f60120c83e0389b1e1be94a1125ad1cca91fdd20434\": container with ID starting with 47ba01f59fbcc994c4c78f60120c83e0389b1e1be94a1125ad1cca91fdd20434 not found: ID does not exist" containerID="47ba01f59fbcc994c4c78f60120c83e0389b1e1be94a1125ad1cca91fdd20434" Mar 08 01:59:02 crc kubenswrapper[4762]: I0308 01:59:02.393869 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47ba01f59fbcc994c4c78f60120c83e0389b1e1be94a1125ad1cca91fdd20434"} err="failed to get container status \"47ba01f59fbcc994c4c78f60120c83e0389b1e1be94a1125ad1cca91fdd20434\": rpc error: code = NotFound desc = could not find container 
\"47ba01f59fbcc994c4c78f60120c83e0389b1e1be94a1125ad1cca91fdd20434\": container with ID starting with 47ba01f59fbcc994c4c78f60120c83e0389b1e1be94a1125ad1cca91fdd20434 not found: ID does not exist" Mar 08 01:59:03 crc kubenswrapper[4762]: I0308 01:59:03.282995 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e825898-237a-4363-ba9c-d83e43cb9063" path="/var/lib/kubelet/pods/6e825898-237a-4363-ba9c-d83e43cb9063/volumes" Mar 08 01:59:03 crc kubenswrapper[4762]: I0308 01:59:03.294879 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z62ws"] Mar 08 01:59:03 crc kubenswrapper[4762]: I0308 01:59:03.444303 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-z62ws" podUID="5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f" containerName="registry-server" containerID="cri-o://341d71249b676dd8773ec3862f766d6f8142e540859af585290bb46e8f68366d" gracePeriod=2 Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.076447 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.117236 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-utilities\") pod \"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f\" (UID: \"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f\") " Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.117438 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t7ld\" (UniqueName: \"kubernetes.io/projected/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-kube-api-access-5t7ld\") pod \"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f\" (UID: \"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f\") " Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.117573 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-catalog-content\") pod \"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f\" (UID: \"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f\") " Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.118126 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-utilities" (OuterVolumeSpecName: "utilities") pod "5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f" (UID: "5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.118340 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.125053 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-kube-api-access-5t7ld" (OuterVolumeSpecName: "kube-api-access-5t7ld") pod "5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f" (UID: "5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f"). InnerVolumeSpecName "kube-api-access-5t7ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.153327 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f" (UID: "5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.219853 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t7ld\" (UniqueName: \"kubernetes.io/projected/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-kube-api-access-5t7ld\") on node \"crc\" DevicePath \"\"" Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.219888 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.459399 4762 generic.go:334] "Generic (PLEG): container finished" podID="5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f" containerID="341d71249b676dd8773ec3862f766d6f8142e540859af585290bb46e8f68366d" exitCode=0 Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.459453 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z62ws" event={"ID":"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f","Type":"ContainerDied","Data":"341d71249b676dd8773ec3862f766d6f8142e540859af585290bb46e8f68366d"} Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.459493 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-z62ws" event={"ID":"5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f","Type":"ContainerDied","Data":"41112cebff28d1c28f68ed75b1c20c6334980656964501be395f5b04518d456f"} Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.459522 4762 scope.go:117] "RemoveContainer" containerID="341d71249b676dd8773ec3862f766d6f8142e540859af585290bb46e8f68366d" Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.459668 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-z62ws" Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.499566 4762 scope.go:117] "RemoveContainer" containerID="61b2b358571672b2e91695c376891d722c7e8f721954cd8ee2ee419c586750b4" Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.511939 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-z62ws"] Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.524530 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-z62ws"] Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.535303 4762 scope.go:117] "RemoveContainer" containerID="687a2058552a1cc675f990ea48015662ee6a2191003005656e43609b3fa9a1f9" Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.588329 4762 scope.go:117] "RemoveContainer" containerID="341d71249b676dd8773ec3862f766d6f8142e540859af585290bb46e8f68366d" Mar 08 01:59:04 crc kubenswrapper[4762]: E0308 01:59:04.588911 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"341d71249b676dd8773ec3862f766d6f8142e540859af585290bb46e8f68366d\": container with ID starting with 341d71249b676dd8773ec3862f766d6f8142e540859af585290bb46e8f68366d not found: ID does not exist" containerID="341d71249b676dd8773ec3862f766d6f8142e540859af585290bb46e8f68366d" Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.588966 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"341d71249b676dd8773ec3862f766d6f8142e540859af585290bb46e8f68366d"} err="failed to get container status \"341d71249b676dd8773ec3862f766d6f8142e540859af585290bb46e8f68366d\": rpc error: code = NotFound desc = could not find container \"341d71249b676dd8773ec3862f766d6f8142e540859af585290bb46e8f68366d\": container with ID starting with 341d71249b676dd8773ec3862f766d6f8142e540859af585290bb46e8f68366d not found: 
ID does not exist" Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.589006 4762 scope.go:117] "RemoveContainer" containerID="61b2b358571672b2e91695c376891d722c7e8f721954cd8ee2ee419c586750b4" Mar 08 01:59:04 crc kubenswrapper[4762]: E0308 01:59:04.589405 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61b2b358571672b2e91695c376891d722c7e8f721954cd8ee2ee419c586750b4\": container with ID starting with 61b2b358571672b2e91695c376891d722c7e8f721954cd8ee2ee419c586750b4 not found: ID does not exist" containerID="61b2b358571672b2e91695c376891d722c7e8f721954cd8ee2ee419c586750b4" Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.589459 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61b2b358571672b2e91695c376891d722c7e8f721954cd8ee2ee419c586750b4"} err="failed to get container status \"61b2b358571672b2e91695c376891d722c7e8f721954cd8ee2ee419c586750b4\": rpc error: code = NotFound desc = could not find container \"61b2b358571672b2e91695c376891d722c7e8f721954cd8ee2ee419c586750b4\": container with ID starting with 61b2b358571672b2e91695c376891d722c7e8f721954cd8ee2ee419c586750b4 not found: ID does not exist" Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.589494 4762 scope.go:117] "RemoveContainer" containerID="687a2058552a1cc675f990ea48015662ee6a2191003005656e43609b3fa9a1f9" Mar 08 01:59:04 crc kubenswrapper[4762]: E0308 01:59:04.589837 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"687a2058552a1cc675f990ea48015662ee6a2191003005656e43609b3fa9a1f9\": container with ID starting with 687a2058552a1cc675f990ea48015662ee6a2191003005656e43609b3fa9a1f9 not found: ID does not exist" containerID="687a2058552a1cc675f990ea48015662ee6a2191003005656e43609b3fa9a1f9" Mar 08 01:59:04 crc kubenswrapper[4762]: I0308 01:59:04.589874 4762 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"687a2058552a1cc675f990ea48015662ee6a2191003005656e43609b3fa9a1f9"} err="failed to get container status \"687a2058552a1cc675f990ea48015662ee6a2191003005656e43609b3fa9a1f9\": rpc error: code = NotFound desc = could not find container \"687a2058552a1cc675f990ea48015662ee6a2191003005656e43609b3fa9a1f9\": container with ID starting with 687a2058552a1cc675f990ea48015662ee6a2191003005656e43609b3fa9a1f9 not found: ID does not exist" Mar 08 01:59:05 crc kubenswrapper[4762]: I0308 01:59:05.279447 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f" path="/var/lib/kubelet/pods/5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f/volumes" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.248561 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq"] Mar 08 02:00:00 crc kubenswrapper[4762]: E0308 02:00:00.250647 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e825898-237a-4363-ba9c-d83e43cb9063" containerName="extract-utilities" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.250671 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e825898-237a-4363-ba9c-d83e43cb9063" containerName="extract-utilities" Mar 08 02:00:00 crc kubenswrapper[4762]: E0308 02:00:00.251264 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e825898-237a-4363-ba9c-d83e43cb9063" containerName="registry-server" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.251281 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e825898-237a-4363-ba9c-d83e43cb9063" containerName="registry-server" Mar 08 02:00:00 crc kubenswrapper[4762]: E0308 02:00:00.251301 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e825898-237a-4363-ba9c-d83e43cb9063" containerName="extract-content" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 
02:00:00.251310 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e825898-237a-4363-ba9c-d83e43cb9063" containerName="extract-content" Mar 08 02:00:00 crc kubenswrapper[4762]: E0308 02:00:00.251364 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f" containerName="extract-content" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.251375 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f" containerName="extract-content" Mar 08 02:00:00 crc kubenswrapper[4762]: E0308 02:00:00.251389 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f" containerName="extract-utilities" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.251397 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f" containerName="extract-utilities" Mar 08 02:00:00 crc kubenswrapper[4762]: E0308 02:00:00.251407 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f" containerName="registry-server" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.251415 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f" containerName="registry-server" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.252454 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e825898-237a-4363-ba9c-d83e43cb9063" containerName="registry-server" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.252914 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eec4bf1-cb65-4143-8c12-7bbd7fcd5f5f" containerName="registry-server" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.259071 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.263863 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548920-pf9rj"] Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.265738 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548920-pf9rj" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.289634 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.289775 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.292647 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.293334 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.293398 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.300930 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548920-pf9rj"] Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.321858 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq"] Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.414383 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8ll5\" (UniqueName: 
\"kubernetes.io/projected/86ea1a87-c3e1-46e8-9f95-7851c84feacf-kube-api-access-w8ll5\") pod \"collect-profiles-29548920-vczsq\" (UID: \"86ea1a87-c3e1-46e8-9f95-7851c84feacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.414470 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86ea1a87-c3e1-46e8-9f95-7851c84feacf-config-volume\") pod \"collect-profiles-29548920-vczsq\" (UID: \"86ea1a87-c3e1-46e8-9f95-7851c84feacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.414649 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86ea1a87-c3e1-46e8-9f95-7851c84feacf-secret-volume\") pod \"collect-profiles-29548920-vczsq\" (UID: \"86ea1a87-c3e1-46e8-9f95-7851c84feacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.415011 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2tkb\" (UniqueName: \"kubernetes.io/projected/313b0ee7-5304-4ad5-a676-84faabdbfdd8-kube-api-access-l2tkb\") pod \"auto-csr-approver-29548920-pf9rj\" (UID: \"313b0ee7-5304-4ad5-a676-84faabdbfdd8\") " pod="openshift-infra/auto-csr-approver-29548920-pf9rj" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.517444 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2tkb\" (UniqueName: \"kubernetes.io/projected/313b0ee7-5304-4ad5-a676-84faabdbfdd8-kube-api-access-l2tkb\") pod \"auto-csr-approver-29548920-pf9rj\" (UID: \"313b0ee7-5304-4ad5-a676-84faabdbfdd8\") " pod="openshift-infra/auto-csr-approver-29548920-pf9rj" Mar 08 02:00:00 
crc kubenswrapper[4762]: I0308 02:00:00.517597 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8ll5\" (UniqueName: \"kubernetes.io/projected/86ea1a87-c3e1-46e8-9f95-7851c84feacf-kube-api-access-w8ll5\") pod \"collect-profiles-29548920-vczsq\" (UID: \"86ea1a87-c3e1-46e8-9f95-7851c84feacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.517674 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86ea1a87-c3e1-46e8-9f95-7851c84feacf-config-volume\") pod \"collect-profiles-29548920-vczsq\" (UID: \"86ea1a87-c3e1-46e8-9f95-7851c84feacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.517707 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86ea1a87-c3e1-46e8-9f95-7851c84feacf-secret-volume\") pod \"collect-profiles-29548920-vczsq\" (UID: \"86ea1a87-c3e1-46e8-9f95-7851c84feacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.527361 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86ea1a87-c3e1-46e8-9f95-7851c84feacf-config-volume\") pod \"collect-profiles-29548920-vczsq\" (UID: \"86ea1a87-c3e1-46e8-9f95-7851c84feacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.549232 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8ll5\" (UniqueName: \"kubernetes.io/projected/86ea1a87-c3e1-46e8-9f95-7851c84feacf-kube-api-access-w8ll5\") pod \"collect-profiles-29548920-vczsq\" (UID: 
\"86ea1a87-c3e1-46e8-9f95-7851c84feacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.549681 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2tkb\" (UniqueName: \"kubernetes.io/projected/313b0ee7-5304-4ad5-a676-84faabdbfdd8-kube-api-access-l2tkb\") pod \"auto-csr-approver-29548920-pf9rj\" (UID: \"313b0ee7-5304-4ad5-a676-84faabdbfdd8\") " pod="openshift-infra/auto-csr-approver-29548920-pf9rj" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.550153 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86ea1a87-c3e1-46e8-9f95-7851c84feacf-secret-volume\") pod \"collect-profiles-29548920-vczsq\" (UID: \"86ea1a87-c3e1-46e8-9f95-7851c84feacf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.624002 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" Mar 08 02:00:00 crc kubenswrapper[4762]: I0308 02:00:00.633963 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548920-pf9rj" Mar 08 02:00:01 crc kubenswrapper[4762]: I0308 02:00:01.705581 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq"] Mar 08 02:00:01 crc kubenswrapper[4762]: W0308 02:00:01.722253 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86ea1a87_c3e1_46e8_9f95_7851c84feacf.slice/crio-377d7982a05886ec899de40d708ad3d9c4496c5c0a5b354c423cfe590533cd5d WatchSource:0}: Error finding container 377d7982a05886ec899de40d708ad3d9c4496c5c0a5b354c423cfe590533cd5d: Status 404 returned error can't find the container with id 377d7982a05886ec899de40d708ad3d9c4496c5c0a5b354c423cfe590533cd5d Mar 08 02:00:01 crc kubenswrapper[4762]: I0308 02:00:01.758264 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548920-pf9rj"] Mar 08 02:00:02 crc kubenswrapper[4762]: I0308 02:00:02.103853 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" event={"ID":"86ea1a87-c3e1-46e8-9f95-7851c84feacf","Type":"ContainerStarted","Data":"599c8f46187a6df619cf6e1f30b07cc70687e653c0833f23997550e7a46ae978"} Mar 08 02:00:02 crc kubenswrapper[4762]: I0308 02:00:02.104223 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" event={"ID":"86ea1a87-c3e1-46e8-9f95-7851c84feacf","Type":"ContainerStarted","Data":"377d7982a05886ec899de40d708ad3d9c4496c5c0a5b354c423cfe590533cd5d"} Mar 08 02:00:02 crc kubenswrapper[4762]: I0308 02:00:02.105356 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548920-pf9rj" 
event={"ID":"313b0ee7-5304-4ad5-a676-84faabdbfdd8","Type":"ContainerStarted","Data":"6ce322703e1d6d5735bd3a43ab9c2fa0c5c9df924fb581eb5bb346f14d72ebc9"} Mar 08 02:00:03 crc kubenswrapper[4762]: I0308 02:00:03.125585 4762 generic.go:334] "Generic (PLEG): container finished" podID="86ea1a87-c3e1-46e8-9f95-7851c84feacf" containerID="599c8f46187a6df619cf6e1f30b07cc70687e653c0833f23997550e7a46ae978" exitCode=0 Mar 08 02:00:03 crc kubenswrapper[4762]: I0308 02:00:03.125835 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" event={"ID":"86ea1a87-c3e1-46e8-9f95-7851c84feacf","Type":"ContainerDied","Data":"599c8f46187a6df619cf6e1f30b07cc70687e653c0833f23997550e7a46ae978"} Mar 08 02:00:05 crc kubenswrapper[4762]: I0308 02:00:05.014067 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" Mar 08 02:00:05 crc kubenswrapper[4762]: I0308 02:00:05.144903 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" event={"ID":"86ea1a87-c3e1-46e8-9f95-7851c84feacf","Type":"ContainerDied","Data":"377d7982a05886ec899de40d708ad3d9c4496c5c0a5b354c423cfe590533cd5d"} Mar 08 02:00:05 crc kubenswrapper[4762]: I0308 02:00:05.145081 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548920-vczsq" Mar 08 02:00:05 crc kubenswrapper[4762]: I0308 02:00:05.145703 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="377d7982a05886ec899de40d708ad3d9c4496c5c0a5b354c423cfe590533cd5d" Mar 08 02:00:05 crc kubenswrapper[4762]: I0308 02:00:05.222653 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86ea1a87-c3e1-46e8-9f95-7851c84feacf-config-volume\") pod \"86ea1a87-c3e1-46e8-9f95-7851c84feacf\" (UID: \"86ea1a87-c3e1-46e8-9f95-7851c84feacf\") " Mar 08 02:00:05 crc kubenswrapper[4762]: I0308 02:00:05.222729 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8ll5\" (UniqueName: \"kubernetes.io/projected/86ea1a87-c3e1-46e8-9f95-7851c84feacf-kube-api-access-w8ll5\") pod \"86ea1a87-c3e1-46e8-9f95-7851c84feacf\" (UID: \"86ea1a87-c3e1-46e8-9f95-7851c84feacf\") " Mar 08 02:00:05 crc kubenswrapper[4762]: I0308 02:00:05.223114 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86ea1a87-c3e1-46e8-9f95-7851c84feacf-secret-volume\") pod \"86ea1a87-c3e1-46e8-9f95-7851c84feacf\" (UID: \"86ea1a87-c3e1-46e8-9f95-7851c84feacf\") " Mar 08 02:00:05 crc kubenswrapper[4762]: I0308 02:00:05.229444 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86ea1a87-c3e1-46e8-9f95-7851c84feacf-config-volume" (OuterVolumeSpecName: "config-volume") pod "86ea1a87-c3e1-46e8-9f95-7851c84feacf" (UID: "86ea1a87-c3e1-46e8-9f95-7851c84feacf"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 02:00:05 crc kubenswrapper[4762]: I0308 02:00:05.238319 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86ea1a87-c3e1-46e8-9f95-7851c84feacf-kube-api-access-w8ll5" (OuterVolumeSpecName: "kube-api-access-w8ll5") pod "86ea1a87-c3e1-46e8-9f95-7851c84feacf" (UID: "86ea1a87-c3e1-46e8-9f95-7851c84feacf"). InnerVolumeSpecName "kube-api-access-w8ll5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:00:05 crc kubenswrapper[4762]: I0308 02:00:05.238652 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86ea1a87-c3e1-46e8-9f95-7851c84feacf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "86ea1a87-c3e1-46e8-9f95-7851c84feacf" (UID: "86ea1a87-c3e1-46e8-9f95-7851c84feacf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 02:00:05 crc kubenswrapper[4762]: I0308 02:00:05.326158 4762 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/86ea1a87-c3e1-46e8-9f95-7851c84feacf-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 08 02:00:05 crc kubenswrapper[4762]: I0308 02:00:05.327971 4762 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86ea1a87-c3e1-46e8-9f95-7851c84feacf-config-volume\") on node \"crc\" DevicePath \"\"" Mar 08 02:00:05 crc kubenswrapper[4762]: I0308 02:00:05.328072 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8ll5\" (UniqueName: \"kubernetes.io/projected/86ea1a87-c3e1-46e8-9f95-7851c84feacf-kube-api-access-w8ll5\") on node \"crc\" DevicePath \"\"" Mar 08 02:00:06 crc kubenswrapper[4762]: I0308 02:00:06.114099 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb"] Mar 08 02:00:06 crc kubenswrapper[4762]: 
I0308 02:00:06.127779 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548875-7blfb"] Mar 08 02:00:07 crc kubenswrapper[4762]: I0308 02:00:07.282959 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87a93ec2-8a4a-4bb4-9f65-1265d565d052" path="/var/lib/kubelet/pods/87a93ec2-8a4a-4bb4-9f65-1265d565d052/volumes" Mar 08 02:00:12 crc kubenswrapper[4762]: I0308 02:00:12.853265 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 02:00:12 crc kubenswrapper[4762]: I0308 02:00:12.854193 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 02:00:26 crc kubenswrapper[4762]: I0308 02:00:26.405243 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548920-pf9rj" event={"ID":"313b0ee7-5304-4ad5-a676-84faabdbfdd8","Type":"ContainerStarted","Data":"2eff4f53374ec438afb4012ff645ba6da77f98d55e375b6d61fb5b5e6f3d5850"} Mar 08 02:00:26 crc kubenswrapper[4762]: I0308 02:00:26.434866 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548920-pf9rj" podStartSLOduration=3.054128662 podStartE2EDuration="26.429597467s" podCreationTimestamp="2026-03-08 02:00:00 +0000 UTC" firstStartedPulling="2026-03-08 02:00:01.779718136 +0000 UTC m=+5823.253862480" lastFinishedPulling="2026-03-08 02:00:25.155186911 +0000 UTC m=+5846.629331285" observedRunningTime="2026-03-08 02:00:26.418729902 
+0000 UTC m=+5847.892874286" watchObservedRunningTime="2026-03-08 02:00:26.429597467 +0000 UTC m=+5847.903741821" Mar 08 02:00:27 crc kubenswrapper[4762]: I0308 02:00:27.423838 4762 generic.go:334] "Generic (PLEG): container finished" podID="313b0ee7-5304-4ad5-a676-84faabdbfdd8" containerID="2eff4f53374ec438afb4012ff645ba6da77f98d55e375b6d61fb5b5e6f3d5850" exitCode=0 Mar 08 02:00:27 crc kubenswrapper[4762]: I0308 02:00:27.424018 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548920-pf9rj" event={"ID":"313b0ee7-5304-4ad5-a676-84faabdbfdd8","Type":"ContainerDied","Data":"2eff4f53374ec438afb4012ff645ba6da77f98d55e375b6d61fb5b5e6f3d5850"} Mar 08 02:00:29 crc kubenswrapper[4762]: I0308 02:00:29.144087 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548920-pf9rj" Mar 08 02:00:29 crc kubenswrapper[4762]: I0308 02:00:29.256516 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2tkb\" (UniqueName: \"kubernetes.io/projected/313b0ee7-5304-4ad5-a676-84faabdbfdd8-kube-api-access-l2tkb\") pod \"313b0ee7-5304-4ad5-a676-84faabdbfdd8\" (UID: \"313b0ee7-5304-4ad5-a676-84faabdbfdd8\") " Mar 08 02:00:29 crc kubenswrapper[4762]: I0308 02:00:29.285502 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313b0ee7-5304-4ad5-a676-84faabdbfdd8-kube-api-access-l2tkb" (OuterVolumeSpecName: "kube-api-access-l2tkb") pod "313b0ee7-5304-4ad5-a676-84faabdbfdd8" (UID: "313b0ee7-5304-4ad5-a676-84faabdbfdd8"). InnerVolumeSpecName "kube-api-access-l2tkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:00:29 crc kubenswrapper[4762]: I0308 02:00:29.359746 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2tkb\" (UniqueName: \"kubernetes.io/projected/313b0ee7-5304-4ad5-a676-84faabdbfdd8-kube-api-access-l2tkb\") on node \"crc\" DevicePath \"\"" Mar 08 02:00:29 crc kubenswrapper[4762]: I0308 02:00:29.808751 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548920-pf9rj" event={"ID":"313b0ee7-5304-4ad5-a676-84faabdbfdd8","Type":"ContainerDied","Data":"6ce322703e1d6d5735bd3a43ab9c2fa0c5c9df924fb581eb5bb346f14d72ebc9"} Mar 08 02:00:29 crc kubenswrapper[4762]: I0308 02:00:29.808844 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ce322703e1d6d5735bd3a43ab9c2fa0c5c9df924fb581eb5bb346f14d72ebc9" Mar 08 02:00:29 crc kubenswrapper[4762]: I0308 02:00:29.808918 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548920-pf9rj" Mar 08 02:00:29 crc kubenswrapper[4762]: I0308 02:00:29.824289 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548914-cgh7w"] Mar 08 02:00:29 crc kubenswrapper[4762]: I0308 02:00:29.836487 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548914-cgh7w"] Mar 08 02:00:31 crc kubenswrapper[4762]: I0308 02:00:31.299536 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de6847d8-05f2-4774-8643-940cc054d210" path="/var/lib/kubelet/pods/de6847d8-05f2-4774-8643-940cc054d210/volumes" Mar 08 02:00:42 crc kubenswrapper[4762]: I0308 02:00:42.852108 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 08 02:00:42 crc kubenswrapper[4762]: I0308 02:00:42.852660 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 02:00:47 crc kubenswrapper[4762]: I0308 02:00:47.317820 4762 scope.go:117] "RemoveContainer" containerID="d4225af5d925b0c5d7162ef844f77e5183e1f6a53de78fede3ee99b6e00cfdf0" Mar 08 02:00:47 crc kubenswrapper[4762]: I0308 02:00:47.504218 4762 scope.go:117] "RemoveContainer" containerID="923c7f782026dcf1f69cbd8aafac33085cc6b3afed33d78395234ad8cf778ffa" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.398460 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29548921-9c9cl"] Mar 08 02:01:00 crc kubenswrapper[4762]: E0308 02:01:00.407283 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86ea1a87-c3e1-46e8-9f95-7851c84feacf" containerName="collect-profiles" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.407668 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="86ea1a87-c3e1-46e8-9f95-7851c84feacf" containerName="collect-profiles" Mar 08 02:01:00 crc kubenswrapper[4762]: E0308 02:01:00.407815 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313b0ee7-5304-4ad5-a676-84faabdbfdd8" containerName="oc" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.407905 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="313b0ee7-5304-4ad5-a676-84faabdbfdd8" containerName="oc" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.411387 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="313b0ee7-5304-4ad5-a676-84faabdbfdd8" containerName="oc" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.412045 4762 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="86ea1a87-c3e1-46e8-9f95-7851c84feacf" containerName="collect-profiles" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.436574 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29548921-9c9cl" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.479719 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29548921-9c9cl"] Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.498593 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-config-data\") pod \"keystone-cron-29548921-9c9cl\" (UID: \"4c591691-ded4-4e08-8401-18558cbaf829\") " pod="openstack/keystone-cron-29548921-9c9cl" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.498927 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-combined-ca-bundle\") pod \"keystone-cron-29548921-9c9cl\" (UID: \"4c591691-ded4-4e08-8401-18558cbaf829\") " pod="openstack/keystone-cron-29548921-9c9cl" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.499013 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-fernet-keys\") pod \"keystone-cron-29548921-9c9cl\" (UID: \"4c591691-ded4-4e08-8401-18558cbaf829\") " pod="openstack/keystone-cron-29548921-9c9cl" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.499515 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppcls\" (UniqueName: \"kubernetes.io/projected/4c591691-ded4-4e08-8401-18558cbaf829-kube-api-access-ppcls\") pod \"keystone-cron-29548921-9c9cl\" 
(UID: \"4c591691-ded4-4e08-8401-18558cbaf829\") " pod="openstack/keystone-cron-29548921-9c9cl" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.601513 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppcls\" (UniqueName: \"kubernetes.io/projected/4c591691-ded4-4e08-8401-18558cbaf829-kube-api-access-ppcls\") pod \"keystone-cron-29548921-9c9cl\" (UID: \"4c591691-ded4-4e08-8401-18558cbaf829\") " pod="openstack/keystone-cron-29548921-9c9cl" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.601721 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-config-data\") pod \"keystone-cron-29548921-9c9cl\" (UID: \"4c591691-ded4-4e08-8401-18558cbaf829\") " pod="openstack/keystone-cron-29548921-9c9cl" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.601922 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-combined-ca-bundle\") pod \"keystone-cron-29548921-9c9cl\" (UID: \"4c591691-ded4-4e08-8401-18558cbaf829\") " pod="openstack/keystone-cron-29548921-9c9cl" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.602069 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-fernet-keys\") pod \"keystone-cron-29548921-9c9cl\" (UID: \"4c591691-ded4-4e08-8401-18558cbaf829\") " pod="openstack/keystone-cron-29548921-9c9cl" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.629491 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-fernet-keys\") pod \"keystone-cron-29548921-9c9cl\" (UID: \"4c591691-ded4-4e08-8401-18558cbaf829\") " 
pod="openstack/keystone-cron-29548921-9c9cl" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.630885 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppcls\" (UniqueName: \"kubernetes.io/projected/4c591691-ded4-4e08-8401-18558cbaf829-kube-api-access-ppcls\") pod \"keystone-cron-29548921-9c9cl\" (UID: \"4c591691-ded4-4e08-8401-18558cbaf829\") " pod="openstack/keystone-cron-29548921-9c9cl" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.632641 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-combined-ca-bundle\") pod \"keystone-cron-29548921-9c9cl\" (UID: \"4c591691-ded4-4e08-8401-18558cbaf829\") " pod="openstack/keystone-cron-29548921-9c9cl" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.635049 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-config-data\") pod \"keystone-cron-29548921-9c9cl\" (UID: \"4c591691-ded4-4e08-8401-18558cbaf829\") " pod="openstack/keystone-cron-29548921-9c9cl" Mar 08 02:01:00 crc kubenswrapper[4762]: I0308 02:01:00.774473 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29548921-9c9cl" Mar 08 02:01:02 crc kubenswrapper[4762]: I0308 02:01:02.085012 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29548921-9c9cl"] Mar 08 02:01:02 crc kubenswrapper[4762]: I0308 02:01:02.146153 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29548921-9c9cl" event={"ID":"4c591691-ded4-4e08-8401-18558cbaf829","Type":"ContainerStarted","Data":"d3df3e1f6955203d982eb061f6e182a75645a9333310499113b45de0e240e1ba"} Mar 08 02:01:03 crc kubenswrapper[4762]: I0308 02:01:03.158492 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29548921-9c9cl" event={"ID":"4c591691-ded4-4e08-8401-18558cbaf829","Type":"ContainerStarted","Data":"b8158c19fb688ac5546cd56a42edfebaf84fd8e5e902765f5d9037e13bfea0a9"} Mar 08 02:01:03 crc kubenswrapper[4762]: I0308 02:01:03.176971 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29548921-9c9cl" podStartSLOduration=3.175987922 podStartE2EDuration="3.175987922s" podCreationTimestamp="2026-03-08 02:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 02:01:03.174659191 +0000 UTC m=+5884.648803545" watchObservedRunningTime="2026-03-08 02:01:03.175987922 +0000 UTC m=+5884.650132276" Mar 08 02:01:07 crc kubenswrapper[4762]: I0308 02:01:07.200542 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29548921-9c9cl" event={"ID":"4c591691-ded4-4e08-8401-18558cbaf829","Type":"ContainerDied","Data":"b8158c19fb688ac5546cd56a42edfebaf84fd8e5e902765f5d9037e13bfea0a9"} Mar 08 02:01:07 crc kubenswrapper[4762]: I0308 02:01:07.203720 4762 generic.go:334] "Generic (PLEG): container finished" podID="4c591691-ded4-4e08-8401-18558cbaf829" containerID="b8158c19fb688ac5546cd56a42edfebaf84fd8e5e902765f5d9037e13bfea0a9" 
exitCode=0 Mar 08 02:01:09 crc kubenswrapper[4762]: I0308 02:01:09.408120 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29548921-9c9cl" Mar 08 02:01:09 crc kubenswrapper[4762]: I0308 02:01:09.523380 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppcls\" (UniqueName: \"kubernetes.io/projected/4c591691-ded4-4e08-8401-18558cbaf829-kube-api-access-ppcls\") pod \"4c591691-ded4-4e08-8401-18558cbaf829\" (UID: \"4c591691-ded4-4e08-8401-18558cbaf829\") " Mar 08 02:01:09 crc kubenswrapper[4762]: I0308 02:01:09.523776 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-fernet-keys\") pod \"4c591691-ded4-4e08-8401-18558cbaf829\" (UID: \"4c591691-ded4-4e08-8401-18558cbaf829\") " Mar 08 02:01:09 crc kubenswrapper[4762]: I0308 02:01:09.523812 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-combined-ca-bundle\") pod \"4c591691-ded4-4e08-8401-18558cbaf829\" (UID: \"4c591691-ded4-4e08-8401-18558cbaf829\") " Mar 08 02:01:09 crc kubenswrapper[4762]: I0308 02:01:09.523911 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-config-data\") pod \"4c591691-ded4-4e08-8401-18558cbaf829\" (UID: \"4c591691-ded4-4e08-8401-18558cbaf829\") " Mar 08 02:01:09 crc kubenswrapper[4762]: I0308 02:01:09.544062 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c591691-ded4-4e08-8401-18558cbaf829-kube-api-access-ppcls" (OuterVolumeSpecName: "kube-api-access-ppcls") pod "4c591691-ded4-4e08-8401-18558cbaf829" (UID: "4c591691-ded4-4e08-8401-18558cbaf829"). 
InnerVolumeSpecName "kube-api-access-ppcls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:01:09 crc kubenswrapper[4762]: I0308 02:01:09.550418 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4c591691-ded4-4e08-8401-18558cbaf829" (UID: "4c591691-ded4-4e08-8401-18558cbaf829"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 02:01:09 crc kubenswrapper[4762]: I0308 02:01:09.585519 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c591691-ded4-4e08-8401-18558cbaf829" (UID: "4c591691-ded4-4e08-8401-18558cbaf829"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 02:01:09 crc kubenswrapper[4762]: I0308 02:01:09.604671 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-config-data" (OuterVolumeSpecName: "config-data") pod "4c591691-ded4-4e08-8401-18558cbaf829" (UID: "4c591691-ded4-4e08-8401-18558cbaf829"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 02:01:09 crc kubenswrapper[4762]: I0308 02:01:09.627475 4762 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 08 02:01:09 crc kubenswrapper[4762]: I0308 02:01:09.627509 4762 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 08 02:01:09 crc kubenswrapper[4762]: I0308 02:01:09.627521 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c591691-ded4-4e08-8401-18558cbaf829-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 02:01:09 crc kubenswrapper[4762]: I0308 02:01:09.627531 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppcls\" (UniqueName: \"kubernetes.io/projected/4c591691-ded4-4e08-8401-18558cbaf829-kube-api-access-ppcls\") on node \"crc\" DevicePath \"\"" Mar 08 02:01:10 crc kubenswrapper[4762]: I0308 02:01:10.245243 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29548921-9c9cl" event={"ID":"4c591691-ded4-4e08-8401-18558cbaf829","Type":"ContainerDied","Data":"d3df3e1f6955203d982eb061f6e182a75645a9333310499113b45de0e240e1ba"} Mar 08 02:01:10 crc kubenswrapper[4762]: I0308 02:01:10.245411 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29548921-9c9cl" Mar 08 02:01:10 crc kubenswrapper[4762]: I0308 02:01:10.246429 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3df3e1f6955203d982eb061f6e182a75645a9333310499113b45de0e240e1ba" Mar 08 02:01:12 crc kubenswrapper[4762]: I0308 02:01:12.856099 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 02:01:12 crc kubenswrapper[4762]: I0308 02:01:12.858883 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 02:01:12 crc kubenswrapper[4762]: I0308 02:01:12.858993 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 02:01:12 crc kubenswrapper[4762]: I0308 02:01:12.860173 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0082d97d952cb42aeca70f0b9e9e306bd0a1280ca1e198865dec731c06cbdd6b"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 02:01:12 crc kubenswrapper[4762]: I0308 02:01:12.860670 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" 
containerID="cri-o://0082d97d952cb42aeca70f0b9e9e306bd0a1280ca1e198865dec731c06cbdd6b" gracePeriod=600 Mar 08 02:01:13 crc kubenswrapper[4762]: I0308 02:01:13.285881 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="0082d97d952cb42aeca70f0b9e9e306bd0a1280ca1e198865dec731c06cbdd6b" exitCode=0 Mar 08 02:01:13 crc kubenswrapper[4762]: I0308 02:01:13.285945 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"0082d97d952cb42aeca70f0b9e9e306bd0a1280ca1e198865dec731c06cbdd6b"} Mar 08 02:01:13 crc kubenswrapper[4762]: I0308 02:01:13.289513 4762 scope.go:117] "RemoveContainer" containerID="d3c75c51f624cb92506b9647b4201bee569295e88b12292f07f5c3ed02606e70" Mar 08 02:01:14 crc kubenswrapper[4762]: I0308 02:01:14.300489 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad"} Mar 08 02:02:01 crc kubenswrapper[4762]: I0308 02:02:01.866685 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548922-b9s6j"] Mar 08 02:02:01 crc kubenswrapper[4762]: E0308 02:02:01.874886 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c591691-ded4-4e08-8401-18558cbaf829" containerName="keystone-cron" Mar 08 02:02:01 crc kubenswrapper[4762]: I0308 02:02:01.874920 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c591691-ded4-4e08-8401-18558cbaf829" containerName="keystone-cron" Mar 08 02:02:01 crc kubenswrapper[4762]: I0308 02:02:01.877935 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c591691-ded4-4e08-8401-18558cbaf829" containerName="keystone-cron" Mar 08 02:02:01 crc 
kubenswrapper[4762]: I0308 02:02:01.890490 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548922-b9s6j" Mar 08 02:02:01 crc kubenswrapper[4762]: I0308 02:02:01.906684 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 02:02:01 crc kubenswrapper[4762]: I0308 02:02:01.906703 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 02:02:01 crc kubenswrapper[4762]: I0308 02:02:01.909865 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 02:02:01 crc kubenswrapper[4762]: I0308 02:02:01.953645 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sczc\" (UniqueName: \"kubernetes.io/projected/e42fa3a8-143b-4850-89ce-f63ef728708a-kube-api-access-7sczc\") pod \"auto-csr-approver-29548922-b9s6j\" (UID: \"e42fa3a8-143b-4850-89ce-f63ef728708a\") " pod="openshift-infra/auto-csr-approver-29548922-b9s6j" Mar 08 02:02:02 crc kubenswrapper[4762]: I0308 02:02:02.060047 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sczc\" (UniqueName: \"kubernetes.io/projected/e42fa3a8-143b-4850-89ce-f63ef728708a-kube-api-access-7sczc\") pod \"auto-csr-approver-29548922-b9s6j\" (UID: \"e42fa3a8-143b-4850-89ce-f63ef728708a\") " pod="openshift-infra/auto-csr-approver-29548922-b9s6j" Mar 08 02:02:02 crc kubenswrapper[4762]: I0308 02:02:02.098400 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sczc\" (UniqueName: \"kubernetes.io/projected/e42fa3a8-143b-4850-89ce-f63ef728708a-kube-api-access-7sczc\") pod \"auto-csr-approver-29548922-b9s6j\" (UID: \"e42fa3a8-143b-4850-89ce-f63ef728708a\") " pod="openshift-infra/auto-csr-approver-29548922-b9s6j" Mar 08 02:02:02 crc 
kubenswrapper[4762]: I0308 02:02:02.225347 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548922-b9s6j"] Mar 08 02:02:02 crc kubenswrapper[4762]: I0308 02:02:02.275348 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548922-b9s6j" Mar 08 02:02:04 crc kubenswrapper[4762]: I0308 02:02:04.833443 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" podUID="97490dfa-d4e5-4013-8a53-199f5872ea4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.026949 4762 patch_prober.go:28] interesting pod/loki-operator-controller-manager-55b56f86c9-fm7md container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.038860 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" podUID="b242b134-d2b7-4e03-a6c1-cd046de89c3d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.109990 4762 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-8fxrr container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.110969 4762 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" podUID="7d1d5c16-4b49-4abf-8b13-0df0fda43b6a" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.110295 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" podUID="cbdc8d75-414a-451a-b594-dc430abfcc09" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.86:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.110028 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" podUID="cbdc8d75-414a-451a-b594-dc430abfcc09" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.86:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.212817 4762 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-nsgkb container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.213198 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" podUID="6fd90908-2008-4941-ba65-62557823e8a0" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.241188 4762 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-phxp4 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.241284 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" podUID="6e603ecb-b9b1-4fba-af81-9da07c682395" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.334032 4762 patch_prober.go:28] interesting pod/console-859d87bf79-sbgvn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.130:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.334084 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-859d87bf79-sbgvn" podUID="9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.130:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.419254 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-vq8xm container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.419304 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" podUID="1efe4203-538b-41b7-9e52-832aeceaac3b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.437772 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-lmbn4 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.437838 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" podUID="3be01762-1f06-4534-8426-ab3b41e8e8d8" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.537019 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb" podUID="625fe5b5-181a-47db-8656-00c8f5fc045f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.98:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.619102 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz" podUID="e2cdcc67-fa0d-4f82-9ca7-219626ee5fdd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.99:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.619094 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n" podUID="60096a41-cef5-4818-a549-96b51b04cd8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.632791 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-vq8xm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.632860 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" podUID="1efe4203-538b-41b7-9e52-832aeceaac3b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.700949 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-lmbn4 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.700952 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b" podUID="2352d4f2-aadc-4ad7-806e-9324d3be5116" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.701292 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" podUID="3be01762-1f06-4534-8426-ab3b41e8e8d8" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.701073 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n" podUID="60096a41-cef5-4818-a549-96b51b04cd8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.783958 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.784034 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.784051 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.784085 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb" podUID="625fe5b5-181a-47db-8656-00c8f5fc045f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.98:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.784112 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.783970 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s" podUID="d5f0be01-26e9-4c4e-8122-61659529e505" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.784402 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz" podUID="e2cdcc67-fa0d-4f82-9ca7-219626ee5fdd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.99:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.784447 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b" podUID="2352d4f2-aadc-4ad7-806e-9324d3be5116" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.866240 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s" podUID="d5f0be01-26e9-4c4e-8122-61659529e505" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.866064 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2" podUID="5edc85d7-4f23-4c94-a998-17f8402c37d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:05 crc kubenswrapper[4762]: I0308 02:02:05.948237 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww" podUID="ead6b665-cd0f-475a-a71b-33fd36246484" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.029982 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2" podUID="5edc85d7-4f23-4c94-a998-17f8402c37d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.030009 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv" podUID="7f6a4543-a300-4393-93e0-fcfeae3ccd61" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.070966 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps" podUID="ac0364ec-ad05-431d-b2f4-c92353f15f4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.196979 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx" podUID="da66283d-dd88-4e6a-a4ad-496064bc8a78" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.342911 4762 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.342986 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="4c30f467-b939-4c68-91f0-707c6893e6ff" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.360981 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww" podUID="ead6b665-cd0f-475a-a71b-33fd36246484" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.361038 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" podUID="6b30a18d-93d3-48de-9b32-7c2326e04220" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.381771 4762 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.381854 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="748fb55a-dbe2-4b8b-9e08-577495a258a4" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.401945 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4qgst" podUID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.483944 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-4qgst" podUID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.496124 4762 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.496217 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="306f3a2d-d090-4aad-b84c-05078f5f8be5" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.566133 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h" podUID="1906010e-f253-4d33-8e97-96d8860c3ff6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.566077 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv" podUID="7f6a4543-a300-4393-93e0-fcfeae3ccd61" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.648979 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" podUID="d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.730969 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" podUID="3216ee69-307e-4151-889b-6e71f6e8c47a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.730996 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps" podUID="ac0364ec-ad05-431d-b2f4-c92353f15f4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.731013 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx" podUID="da66283d-dd88-4e6a-a4ad-496064bc8a78" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.812979 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" podUID="f82c21a8-e080-4d70-b898-8c15a7b71989" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.813188 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" podUID="05d1f89d-b2b2-48ff-8555-e9f68ac3300a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.813233 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" podUID="05d1f89d-b2b2-48ff-8555-e9f68ac3300a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.813946 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" podUID="6b30a18d-93d3-48de-9b32-7c2326e04220" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.896068 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" podUID="1bc55675-0793-4489-b05d-03581df96527" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.979091 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4qgst" podUID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.979111 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" podUID="8e8be3de-e055-441d-bfff-7b966b35dc15" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:06 crc kubenswrapper[4762]: I0308 02:02:06.979285 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h" podUID="1906010e-f253-4d33-8e97-96d8860c3ff6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:07 crc kubenswrapper[4762]: I0308 02:02:07.062120 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" podUID="7a1f5442-2f22-4dff-b59a-0a8233a83b41" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.88:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:07 crc kubenswrapper[4762]: I0308 02:02:07.062134 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" podUID="d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:07 crc kubenswrapper[4762]: I0308 02:02:07.062301 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" podUID="3216ee69-307e-4151-889b-6e71f6e8c47a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:07 crc kubenswrapper[4762]: I0308 02:02:07.062888 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" podUID="1bc55675-0793-4489-b05d-03581df96527" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:07 crc kubenswrapper[4762]: I0308 02:02:07.062941 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" podUID="8e8be3de-e055-441d-bfff-7b966b35dc15" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:07 crc kubenswrapper[4762]: I0308 02:02:07.062988 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" podUID="7a1f5442-2f22-4dff-b59a-0a8233a83b41" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.88:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:07 crc kubenswrapper[4762]: I0308 02:02:07.062883 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" podUID="f82c21a8-e080-4d70-b898-8c15a7b71989" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:07 crc kubenswrapper[4762]: I0308 02:02:07.357947 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" podUID="bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:07 crc kubenswrapper[4762]: I0308 02:02:07.428733 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-k687p" podUID="b2dce5bf-2a64-44af-bfe2-0a15fd5d357d" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 02:02:07 crc kubenswrapper[4762]: I0308 02:02:07.811141 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:07 crc kubenswrapper[4762]: I0308 02:02:07.811536 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:07 crc kubenswrapper[4762]: I0308 02:02:07.852983 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:07 crc kubenswrapper[4762]: I0308 02:02:07.853034 4762 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-nq4dh container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:07 crc kubenswrapper[4762]: I0308 02:02:07.853049 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:07 crc kubenswrapper[4762]: I0308 02:02:07.853095 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" podUID="9d3224d2-e83a-4707-9e42-e13d68451af3" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:07 crc kubenswrapper[4762]: I0308 02:02:07.853001 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" podUID="8fc55d76-cb72-4ac9-b132-24b997e298a3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.083961 4762 patch_prober.go:28] interesting pod/console-operator-58897d9998-tw6wd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.084278 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" podUID="1d484943-583d-493a-ab04-bf99847ff4c4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.083971 4762 patch_prober.go:28] interesting pod/console-operator-58897d9998-tw6wd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.084338 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" podUID="1d484943-583d-493a-ab04-bf99847ff4c4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.226942 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" podUID="4d895a55-fc09-4986-ae61-19b0c5425d15" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.383472 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="8103d22d-043e-4af1-a19d-307905e2a05f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.159:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.383514 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="8103d22d-043e-4af1-a19d-307905e2a05f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.159:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.431407 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-k687p" podUID="b2dce5bf-2a64-44af-bfe2-0a15fd5d357d" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.484927 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.485009 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.484934 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.485082 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.697323 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-xtp5w" podUID="ebd76fbf-3a5c-409a-9c6c-5052042a769c" containerName="nmstate-handler" probeResult="failure" output="command timed out"
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.797905 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.798168 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.798178 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:08 crc kubenswrapper[4762]: I0308 02:02:08.798247 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.077909 4762 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sxtbp container/marketplace-operator namespace/openshift-marketplace: 
Readiness probe status=failure output="Get \"http://10.217.0.70:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.077916 4762 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sxtbp container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.70:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.078046 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" podUID="45e73cf0-17af-446f-8a92-5c45dee4ee00" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.70:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.077968 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" podUID="45e73cf0-17af-446f-8a92-5c45dee4ee00" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.70:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.151608 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" podUID="9056b43f-9cc2-446b-a516-04ba97bf2fd0" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.46:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.276932 4762 patch_prober.go:28] interesting pod/nmstate-webhook-786f45cff4-2fjlb container/nmstate-webhook namespace/openshift-nmstate: Readiness probe 
status=failure output="Get \"https://10.217.0.80:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.276972 4762 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-nh2q6 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.276993 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb" podUID="274d72c4-da34-4213-9aa4-daa52cf6668f" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.80:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.277020 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" podUID="04980224-fe82-485b-83f9-9c3d30b196db" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.276943 4762 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-nh2q6 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.277068 4762 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" podUID="04980224-fe82-485b-83f9-9c3d30b196db" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.373930 4762 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vwrhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.374213 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" podUID="741e90e6-8de3-4054-94cf-7ada0da0e454" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.374069 4762 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vwrhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.374270 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" podUID="741e90e6-8de3-4054-94cf-7ada0da0e454" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.404795 4762 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-88s4d container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.404811 4762 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-88s4d container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.404865 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" podUID="7a27cd53-cc43-4227-a15a-d55e0bfaf81d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.404885 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" podUID="7a27cd53-cc43-4227-a15a-d55e0bfaf81d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.462028 4762 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dbz7x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.462041 4762 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dbz7x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.462144 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" podUID="4de5942e-acf8-4138-acc3-42c177a7f997" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.462120 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" podUID="4de5942e-acf8-4138-acc3-42c177a7f997" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.730954 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-4j4bt" podUID="3cafb56e-d1ea-48b5-9b1c-691e86cba0d9" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:09 crc kubenswrapper[4762]: I0308 02:02:09.730988 4762 prober.go:107] "Probe failed" probeType="Liveness" 
pod="metallb-system/speaker-4j4bt" podUID="3cafb56e-d1ea-48b5-9b1c-691e86cba0d9" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:10 crc kubenswrapper[4762]: I0308 02:02:10.419785 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-vq8xm container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:10 crc kubenswrapper[4762]: I0308 02:02:10.419847 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" podUID="1efe4203-538b-41b7-9e52-832aeceaac3b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:10 crc kubenswrapper[4762]: I0308 02:02:10.419806 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-vq8xm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:10 crc kubenswrapper[4762]: I0308 02:02:10.419986 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" podUID="1efe4203-538b-41b7-9e52-832aeceaac3b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:10 crc kubenswrapper[4762]: I0308 02:02:10.437641 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-lmbn4 container/opa 
namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:10 crc kubenswrapper[4762]: I0308 02:02:10.437701 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-lmbn4 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:10 crc kubenswrapper[4762]: I0308 02:02:10.437700 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" podUID="3be01762-1f06-4534-8426-ab3b41e8e8d8" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:10 crc kubenswrapper[4762]: I0308 02:02:10.437774 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" podUID="3be01762-1f06-4534-8426-ab3b41e8e8d8" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:10 crc kubenswrapper[4762]: I0308 02:02:10.523296 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-k687p" podUID="b2dce5bf-2a64-44af-bfe2-0a15fd5d357d" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:10 crc kubenswrapper[4762]: I0308 02:02:10.718716 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="0d82ab27-d2d8-486a-8514-2af542e4223a" containerName="galera" probeResult="failure" output="command 
timed out" Mar 08 02:02:10 crc kubenswrapper[4762]: I0308 02:02:10.718795 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="0d82ab27-d2d8-486a-8514-2af542e4223a" containerName="galera" probeResult="failure" output="command timed out" Mar 08 02:02:11 crc kubenswrapper[4762]: I0308 02:02:11.477667 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:11 crc kubenswrapper[4762]: I0308 02:02:11.477733 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:11 crc kubenswrapper[4762]: I0308 02:02:11.477738 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:11 crc kubenswrapper[4762]: I0308 02:02:11.477909 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:11 crc kubenswrapper[4762]: I0308 02:02:11.479389 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" Mar 08 02:02:11 crc kubenswrapper[4762]: I0308 02:02:11.479465 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" Mar 08 02:02:11 crc kubenswrapper[4762]: I0308 02:02:11.487399 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"d847f33cd40fb03b71a9e6df743c36e9d4b16087f3dcdcb9a0edb416e0cf424e"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 08 02:02:11 crc kubenswrapper[4762]: I0308 02:02:11.490232 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" containerID="cri-o://d847f33cd40fb03b71a9e6df743c36e9d4b16087f3dcdcb9a0edb416e0cf424e" gracePeriod=30 Mar 08 02:02:11 crc kubenswrapper[4762]: I0308 02:02:11.701209 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-xtl98" podUID="0707d234-c53e-4212-b289-65a10c0b1502" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:11 crc kubenswrapper[4762]: I0308 02:02:11.701228 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 08 02:02:11 crc kubenswrapper[4762]: I0308 02:02:11.701721 4762 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-xtl98" podUID="0707d234-c53e-4212-b289-65a10c0b1502" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:12 crc kubenswrapper[4762]: I0308 02:02:12.371845 4762 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:12 crc kubenswrapper[4762]: I0308 02:02:12.372205 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:12 crc kubenswrapper[4762]: I0308 02:02:12.480697 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:12 crc kubenswrapper[4762]: I0308 02:02:12.480817 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:12 crc kubenswrapper[4762]: I0308 
02:02:12.563332 4762 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:12 crc kubenswrapper[4762]: I0308 02:02:12.563402 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:12 crc kubenswrapper[4762]: I0308 02:02:12.697377 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f50a5390-b172-470a-bcfd-161e360d90db" containerName="galera" probeResult="failure" output="command timed out" Mar 08 02:02:12 crc kubenswrapper[4762]: I0308 02:02:12.697557 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="f50a5390-b172-470a-bcfd-161e360d90db" containerName="galera" probeResult="failure" output="command timed out" Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.477712 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.478154 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.580289 4762 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-jr6wh container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.580346 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" podUID="977085a1-8184-4c52-8e8d-6cb64635e335" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.580427 4762 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-jr6wh container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.580496 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" podUID="977085a1-8184-4c52-8e8d-6cb64635e335" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.669117 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67" podUID="2032bfa9-398b-4802-84bc-272c70f31afb" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.94:8081/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.669153 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67" podUID="2032bfa9-398b-4802-84bc-272c70f31afb" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.794068 4762 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-9ntmw container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.28:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.794130 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" podUID="3082ab77-d932-4350-915b-43172392ba8e" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.28:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.794248 4762 patch_prober.go:28] interesting pod/controller-manager-68c69b75b4-sfmfl container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.794306 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" podUID="334f5d4e-935b-42ba-b77f-2e501853fef8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.794221 4762 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-9ntmw container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.28:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.794346 4762 patch_prober.go:28] interesting pod/oauth-openshift-9c9dfc54c-9qcbq container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.794321 4762 patch_prober.go:28] interesting pod/controller-manager-68c69b75b4-sfmfl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.794414 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" podUID="334f5d4e-935b-42ba-b77f-2e501853fef8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.794434 4762 patch_prober.go:28] interesting pod/oauth-openshift-9c9dfc54c-9qcbq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.794431 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" podUID="2e89b5ad-4281-471d-a5c5-55a2351a9cab" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.794462 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" podUID="2e89b5ad-4281-471d-a5c5-55a2351a9cab" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.794373 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" podUID="3082ab77-d932-4350-915b-43172392ba8e" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.28:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.802892 4762 patch_prober.go:28] interesting pod/route-controller-manager-777f6d5845-rfx2s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.802964 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" 
podUID="4e3e6f1d-92e9-411e-a724-03fea1fc802b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.802910 4762 patch_prober.go:28] interesting pod/route-controller-manager-777f6d5845-rfx2s container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:13 crc kubenswrapper[4762]: I0308 02:02:13.803067 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" podUID="4e3e6f1d-92e9-411e-a724-03fea1fc802b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:14 crc kubenswrapper[4762]: I0308 02:02:14.808128 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" podUID="97490dfa-d4e5-4013-8a53-199f5872ea4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.067015 4762 patch_prober.go:28] interesting pod/loki-operator-controller-manager-55b56f86c9-fm7md container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.50:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 
02:02:15.067121 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" podUID="b242b134-d2b7-4e03-a6c1-cd046de89c3d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.148976 4762 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-8fxrr container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.149053 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" podUID="7d1d5c16-4b49-4abf-8b13-0df0fda43b6a" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.149078 4762 patch_prober.go:28] interesting pod/loki-operator-controller-manager-55b56f86c9-fm7md container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.149157 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" podUID="b242b134-d2b7-4e03-a6c1-cd046de89c3d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while 
awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.149411 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" podUID="cbdc8d75-414a-451a-b594-dc430abfcc09" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.86:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.149476 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" podUID="cbdc8d75-414a-451a-b594-dc430abfcc09" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.86:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.212987 4762 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-nsgkb container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.213065 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" podUID="6fd90908-2008-4941-ba65-62557823e8a0" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.241074 4762 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-phxp4 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.241138 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" podUID="6e603ecb-b9b1-4fba-af81-9da07c682395" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.353001 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz" podUID="20b130fa-d7f7-441a-bd96-0d5858f1ece1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.353028 4762 patch_prober.go:28] interesting pod/console-859d87bf79-sbgvn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.130:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.353321 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-859d87bf79-sbgvn" podUID="9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.130:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.452997 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb" 
podUID="625fe5b5-181a-47db-8656-00c8f5fc045f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.98:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.494899 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n" podUID="60096a41-cef5-4818-a549-96b51b04cd8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.494980 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-vq8xm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.495048 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" podUID="1efe4203-538b-41b7-9e52-832aeceaac3b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.495348 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz" podUID="e2cdcc67-fa0d-4f82-9ca7-219626ee5fdd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.99:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.535979 4762 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b" podUID="2352d4f2-aadc-4ad7-806e-9324d3be5116" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.536007 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-lmbn4 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.536104 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" podUID="3be01762-1f06-4534-8426-ab3b41e8e8d8" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.579088 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s" podUID="d5f0be01-26e9-4c4e-8122-61659529e505" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.702113 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-7xslc" podUID="b83aab9a-f794-43d3-af07-0a00dac138da" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.702122 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-7xslc" 
podUID="b83aab9a-f794-43d3-af07-0a00dac138da" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.702223 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-8szql" podUID="c578d6b5-daa2-4fd3-88ee-29ab82caaa5a" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.705213 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-8szql" podUID="c578d6b5-daa2-4fd3-88ee-29ab82caaa5a" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.735939 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2" podUID="5edc85d7-4f23-4c94-a998-17f8402c37d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.806932 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww" podUID="ead6b665-cd0f-475a-a71b-33fd36246484" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.815636 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="b14f9065-ffe7-430a-b9e9-f62ce942558e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.2:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 
02:02:15.815890 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="b14f9065-ffe7-430a-b9e9-f62ce942558e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.2:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.848023 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv" podUID="7f6a4543-a300-4393-93e0-fcfeae3ccd61" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.971881 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps" podUID="ac0364ec-ad05-431d-b2f4-c92353f15f4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:15 crc kubenswrapper[4762]: I0308 02:02:15.971901 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" podUID="05d1f89d-b2b2-48ff-8555-e9f68ac3300a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.013906 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx" podUID="da66283d-dd88-4e6a-a4ad-496064bc8a78" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 
08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.013987 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" podUID="6b30a18d-93d3-48de-9b32-7c2326e04220" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.137159 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4qgst" podUID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.178086 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" podUID="d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.178164 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-4qgst" podUID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.219223 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" podUID="3216ee69-307e-4151-889b-6e71f6e8c47a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 
02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.219315 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4qgst" podUID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.260007 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" podUID="f82c21a8-e080-4d70-b898-8c15a7b71989" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.342632 4762 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.342698 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="4c30f467-b939-4c68-91f0-707c6893e6ff" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.375036 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" podUID="1bc55675-0793-4489-b05d-03581df96527" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.389161 
4762 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.62:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.389268 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="748fb55a-dbe2-4b8b-9e08-577495a258a4" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.479915 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" podUID="8e8be3de-e055-441d-bfff-7b966b35dc15" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.480438 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.480464 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.496351 4762 patch_prober.go:28] interesting 
pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.496392 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="306f3a2d-d090-4aad-b84c-05078f5f8be5" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.609975 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" podUID="7a1f5442-2f22-4dff-b59a-0a8233a83b41" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.88:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.610010 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" podUID="7a1f5442-2f22-4dff-b59a-0a8233a83b41" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.88:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.782995 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-86ddb6bd46-qgc88" podUID="0b0d938e-fbb6-4ed9-8822-c87f8ce564e3" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.89:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.783064 4762 prober.go:107] "Probe failed" probeType="Readiness" 
pod="metallb-system/controller-86ddb6bd46-qgc88" podUID="0b0d938e-fbb6-4ed9-8822-c87f8ce564e3" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.89:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.901987 4762 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-xpcwq container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.73:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.902059 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" podUID="ba7245b9-c69a-44e9-bbff-61213cb5a743" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.73:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.902120 4762 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-xpcwq container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.73:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:16 crc kubenswrapper[4762]: I0308 02:02:16.902136 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" podUID="ba7245b9-c69a-44e9-bbff-61213cb5a743" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.73:5000/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:17 crc kubenswrapper[4762]: I0308 02:02:17.358952 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" 
podUID="bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:17 crc kubenswrapper[4762]: I0308 02:02:17.399971 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" podUID="bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:17 crc kubenswrapper[4762]: I0308 02:02:17.701204 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 08 02:02:17 crc kubenswrapper[4762]: I0308 02:02:17.701215 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-78hq2" podUID="4f3c2509-9848-4e76-96ae-8f815f66d6d7" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:17 crc kubenswrapper[4762]: I0308 02:02:17.701173 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-78hq2" podUID="4f3c2509-9848-4e76-96ae-8f815f66d6d7" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:17 crc kubenswrapper[4762]: I0308 02:02:17.811966 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:17 crc kubenswrapper[4762]: I0308 02:02:17.812241 4762 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:17 crc kubenswrapper[4762]: I0308 02:02:17.894899 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:17 crc kubenswrapper[4762]: I0308 02:02:17.894964 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:17 crc kubenswrapper[4762]: I0308 02:02:17.894970 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" podUID="8fc55d76-cb72-4ac9-b132-24b997e298a3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:17 crc kubenswrapper[4762]: I0308 02:02:17.895748 4762 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-nq4dh container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:17 crc kubenswrapper[4762]: I0308 02:02:17.895897 4762 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" podUID="8fc55d76-cb72-4ac9-b132-24b997e298a3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:17 crc kubenswrapper[4762]: I0308 02:02:17.896099 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" podUID="9d3224d2-e83a-4707-9e42-e13d68451af3" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:18 crc kubenswrapper[4762]: I0308 02:02:18.002610 4762 patch_prober.go:28] interesting pod/console-operator-58897d9998-tw6wd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:18 crc kubenswrapper[4762]: I0308 02:02:18.002627 4762 patch_prober.go:28] interesting pod/console-operator-58897d9998-tw6wd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:18 crc kubenswrapper[4762]: I0308 02:02:18.004087 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" podUID="1d484943-583d-493a-ab04-bf99847ff4c4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" Mar 08 02:02:18 crc kubenswrapper[4762]: I0308 02:02:18.004098 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" podUID="1d484943-583d-493a-ab04-bf99847ff4c4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:18 crc kubenswrapper[4762]: I0308 02:02:18.267994 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" podUID="4d895a55-fc09-4986-ae61-19b0c5425d15" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:18 crc kubenswrapper[4762]: I0308 02:02:18.268115 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" podUID="4d895a55-fc09-4986-ae61-19b0c5425d15" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:18 crc kubenswrapper[4762]: I0308 02:02:18.383110 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="8103d22d-043e-4af1-a19d-307905e2a05f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.159:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:18 crc kubenswrapper[4762]: I0308 02:02:18.383513 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="8103d22d-043e-4af1-a19d-307905e2a05f" containerName="prometheus" probeResult="failure" output="Get 
\"https://10.217.0.159:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:18 crc kubenswrapper[4762]: I0308 02:02:18.702442 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-rfbxq" podUID="0870b34f-2648-451a-a34e-8555e4e4982a" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:18 crc kubenswrapper[4762]: I0308 02:02:18.702531 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-rfbxq" podUID="0870b34f-2648-451a-a34e-8555e4e4982a" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:18 crc kubenswrapper[4762]: I0308 02:02:18.796966 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:18 crc kubenswrapper[4762]: I0308 02:02:18.797357 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:18 crc kubenswrapper[4762]: I0308 02:02:18.797566 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:18 crc kubenswrapper[4762]: I0308 02:02:18.797717 4762 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.077241 4762 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sxtbp container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.70:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.077240 4762 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-sxtbp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.70:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.077326 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" podUID="45e73cf0-17af-446f-8a92-5c45dee4ee00" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.70:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.077394 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-sxtbp" podUID="45e73cf0-17af-446f-8a92-5c45dee4ee00" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.70:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.131962 4762 prober.go:107] "Probe failed" probeType="Readiness" 
pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" podUID="9056b43f-9cc2-446b-a516-04ba97bf2fd0" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.46:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.316980 4762 patch_prober.go:28] interesting pod/nmstate-webhook-786f45cff4-2fjlb container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.80:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.316996 4762 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-nh2q6 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.317076 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb" podUID="274d72c4-da34-4213-9aa4-daa52cf6668f" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.80:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.317138 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" podUID="04980224-fe82-485b-83f9-9c3d30b196db" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 
02:02:19.317265 4762 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-nh2q6 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.317301 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" podUID="04980224-fe82-485b-83f9-9c3d30b196db" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.317651 4762 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.317711 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.372578 4762 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vwrhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" 
start-of-body= Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.372627 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" podUID="741e90e6-8de3-4054-94cf-7ada0da0e454" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.372541 4762 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vwrhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.373085 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" podUID="741e90e6-8de3-4054-94cf-7ada0da0e454" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.404237 4762 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-88s4d container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.404587 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" podUID="7a27cd53-cc43-4227-a15a-d55e0bfaf81d" containerName="packageserver" 
probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.404266 4762 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-88s4d container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.404930 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" podUID="7a27cd53-cc43-4227-a15a-d55e0bfaf81d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.462325 4762 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dbz7x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.462399 4762 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dbz7x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.462407 4762 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" podUID="4de5942e-acf8-4138-acc3-42c177a7f997" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.462454 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" podUID="4de5942e-acf8-4138-acc3-42c177a7f997" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.477579 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.477626 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.698044 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-xtp5w" podUID="ebd76fbf-3a5c-409a-9c6c-5052042a769c" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.731040 4762 prober.go:107] "Probe failed" probeType="Readiness" 
pod="metallb-system/speaker-4j4bt" podUID="3cafb56e-d1ea-48b5-9b1c-691e86cba0d9" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:19 crc kubenswrapper[4762]: I0308 02:02:19.731127 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-4j4bt" podUID="3cafb56e-d1ea-48b5-9b1c-691e86cba0d9" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:20 crc kubenswrapper[4762]: I0308 02:02:20.420170 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-vq8xm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:20 crc kubenswrapper[4762]: I0308 02:02:20.420584 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" podUID="1efe4203-538b-41b7-9e52-832aeceaac3b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:20 crc kubenswrapper[4762]: I0308 02:02:20.420205 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-vq8xm container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:20 crc kubenswrapper[4762]: I0308 02:02:20.420649 4762 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" podUID="1efe4203-538b-41b7-9e52-832aeceaac3b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:20 crc kubenswrapper[4762]: I0308 02:02:20.463859 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-lmbn4 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:20 crc kubenswrapper[4762]: I0308 02:02:20.463951 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" podUID="3be01762-1f06-4534-8426-ab3b41e8e8d8" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:20 crc kubenswrapper[4762]: I0308 02:02:20.464039 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-lmbn4 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:20 crc kubenswrapper[4762]: I0308 02:02:20.464065 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" podUID="3be01762-1f06-4534-8426-ab3b41e8e8d8" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:20 crc kubenswrapper[4762]: I0308 02:02:20.696028 4762 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/openstack-galera-0" podUID="0d82ab27-d2d8-486a-8514-2af542e4223a" containerName="galera" probeResult="failure" output="command timed out" Mar 08 02:02:20 crc kubenswrapper[4762]: I0308 02:02:20.697007 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="0d82ab27-d2d8-486a-8514-2af542e4223a" containerName="galera" probeResult="failure" output="command timed out" Mar 08 02:02:21 crc kubenswrapper[4762]: I0308 02:02:21.371946 4762 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:21 crc kubenswrapper[4762]: I0308 02:02:21.372380 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:21 crc kubenswrapper[4762]: I0308 02:02:21.372008 4762 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:21 crc kubenswrapper[4762]: I0308 02:02:21.372465 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" Mar 08 02:02:21 crc kubenswrapper[4762]: I0308 02:02:21.561925 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-k687p" podUID="b2dce5bf-2a64-44af-bfe2-0a15fd5d357d" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.41:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:21 crc kubenswrapper[4762]: I0308 02:02:21.696574 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f50a5390-b172-470a-bcfd-161e360d90db" containerName="galera" probeResult="failure" output="command timed out" Mar 08 02:02:21 crc kubenswrapper[4762]: I0308 02:02:21.698976 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="f50a5390-b172-470a-bcfd-161e360d90db" containerName="galera" probeResult="failure" output="command timed out" Mar 08 02:02:21 crc kubenswrapper[4762]: I0308 02:02:21.702144 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-xtl98" podUID="0707d234-c53e-4212-b289-65a10c0b1502" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:21 crc kubenswrapper[4762]: I0308 02:02:21.702148 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" podUID="e6f987e6-c9d7-410e-9401-492e35771592" containerName="nbdb" probeResult="failure" output="command timed out" Mar 08 02:02:21 crc kubenswrapper[4762]: I0308 02:02:21.703161 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" podUID="e6f987e6-c9d7-410e-9401-492e35771592" containerName="sbdb" probeResult="failure" output="command timed out" Mar 08 02:02:21 crc kubenswrapper[4762]: I0308 02:02:21.704581 4762 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/openstack-operator-index-xtl98" podUID="0707d234-c53e-4212-b289-65a10c0b1502" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:22 crc kubenswrapper[4762]: I0308 02:02:22.326596 4762 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-h5bzn container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:22 crc kubenswrapper[4762]: I0308 02:02:22.327113 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-h5bzn" podUID="460afccf-5d2c-44d9-813e-41c06be89ab7" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.22:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:22 crc kubenswrapper[4762]: I0308 02:02:22.372237 4762 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:22 crc kubenswrapper[4762]: I0308 02:02:22.372346 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:22 crc kubenswrapper[4762]: I0308 02:02:22.478264 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get 
\"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 08 02:02:22 crc kubenswrapper[4762]: I0308 02:02:22.478354 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 08 02:02:22 crc kubenswrapper[4762]: I0308 02:02:22.560632 4762 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:22 crc kubenswrapper[4762]: I0308 02:02:22.560716 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:22 crc kubenswrapper[4762]: I0308 02:02:22.622951 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-k687p" podUID="b2dce5bf-2a64-44af-bfe2-0a15fd5d357d" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.384134 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="8103d22d-043e-4af1-a19d-307905e2a05f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.159:9090/-/ready\": net/http: 
request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.384302 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="8103d22d-043e-4af1-a19d-307905e2a05f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.159:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.580058 4762 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-jr6wh container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.580356 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" podUID="977085a1-8184-4c52-8e8d-6cb64635e335" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.580071 4762 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-jr6wh container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.580405 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" podUID="977085a1-8184-4c52-8e8d-6cb64635e335" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 
02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.628081 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67" podUID="2032bfa9-398b-4802-84bc-272c70f31afb" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.703803 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.712251 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-k687p" podUID="b2dce5bf-2a64-44af-bfe2-0a15fd5d357d" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.716242 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.734102 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"5a6d8fac6a6e64a0518c0702f5a28f4dae3953a5d0f5c4c20afa6d40f8071d06"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.736110 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14" containerName="ceilometer-central-agent" containerID="cri-o://5a6d8fac6a6e64a0518c0702f5a28f4dae3953a5d0f5c4c20afa6d40f8071d06" gracePeriod=30 Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 
02:02:23.752983 4762 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-9ntmw container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.28:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.753057 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-9ntmw" podUID="3082ab77-d932-4350-915b-43172392ba8e" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.28:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.794134 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-scheduler-0" podUID="65897654-e519-4a6a-9557-2344198bc5cd" containerName="manila-scheduler" probeResult="failure" output="Get \"http://10.217.1.117:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.794402 4762 patch_prober.go:28] interesting pod/controller-manager-68c69b75b4-sfmfl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.794461 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" podUID="334f5d4e-935b-42ba-b77f-2e501853fef8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:23 crc kubenswrapper[4762]: 
I0308 02:02:23.794510 4762 patch_prober.go:28] interesting pod/controller-manager-68c69b75b4-sfmfl container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.794556 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-68c69b75b4-sfmfl" podUID="334f5d4e-935b-42ba-b77f-2e501853fef8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.794612 4762 patch_prober.go:28] interesting pod/oauth-openshift-9c9dfc54c-9qcbq container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.794671 4762 patch_prober.go:28] interesting pod/oauth-openshift-9c9dfc54c-9qcbq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.794689 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" podUID="2e89b5ad-4281-471d-a5c5-55a2351a9cab" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.794682 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" podUID="2e89b5ad-4281-471d-a5c5-55a2351a9cab" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.802604 4762 patch_prober.go:28] interesting pod/route-controller-manager-777f6d5845-rfx2s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.802669 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" podUID="4e3e6f1d-92e9-411e-a724-03fea1fc802b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.802607 4762 patch_prober.go:28] interesting pod/route-controller-manager-777f6d5845-rfx2s container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:23 crc kubenswrapper[4762]: I0308 02:02:23.802732 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" podUID="4e3e6f1d-92e9-411e-a724-03fea1fc802b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:24 crc kubenswrapper[4762]: I0308 02:02:24.150013 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" podUID="9056b43f-9cc2-446b-a516-04ba97bf2fd0" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.46:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:24 crc kubenswrapper[4762]: I0308 02:02:24.150037 4762 patch_prober.go:28] interesting pod/loki-operator-controller-manager-55b56f86c9-fm7md container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.50:8081/readyz\": read tcp 10.217.0.2:46386->10.217.0.50:8081: read: connection reset by peer" start-of-body=
Mar 08 02:02:24 crc kubenswrapper[4762]: I0308 02:02:24.150131 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" podUID="9056b43f-9cc2-446b-a516-04ba97bf2fd0" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.46:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:24 crc kubenswrapper[4762]: I0308 02:02:24.150144 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" podUID="b242b134-d2b7-4e03-a6c1-cd046de89c3d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/readyz\": read tcp 10.217.0.2:46386->10.217.0.50:8081: read: connection reset by peer"
Mar 08 02:02:24 crc kubenswrapper[4762]: I0308 02:02:24.150749 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md"
Mar 08 02:02:24 crc kubenswrapper[4762]: I0308 02:02:24.151659 4762 patch_prober.go:28] interesting pod/loki-operator-controller-manager-55b56f86c9-fm7md container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.50:8081/readyz\": dial tcp 10.217.0.50:8081: connect: connection refused" start-of-body=
Mar 08 02:02:24 crc kubenswrapper[4762]: I0308 02:02:24.151700 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" podUID="b242b134-d2b7-4e03-a6c1-cd046de89c3d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.50:8081/readyz\": dial tcp 10.217.0.50:8081: connect: connection refused"
Mar 08 02:02:24 crc kubenswrapper[4762]: I0308 02:02:24.374885 4762 patch_prober.go:28] interesting pod/console-859d87bf79-sbgvn container/console namespace/openshift-console: Liveness probe status=failure output="Get \"https://10.217.0.130:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:24 crc kubenswrapper[4762]: I0308 02:02:24.374957 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/console-859d87bf79-sbgvn" podUID="9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.130:8443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:24 crc kubenswrapper[4762]: I0308 02:02:24.375021 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/console-859d87bf79-sbgvn"
Mar 08 02:02:24 crc kubenswrapper[4762]: I0308 02:02:24.376198 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console" containerStatusID={"Type":"cri-o","ID":"cc56921b87d3b48c9754bb0e5a8c075000c33d418a4763fd63d81afdbf0f1207"} pod="openshift-console/console-859d87bf79-sbgvn" containerMessage="Container console failed liveness probe, will be restarted"
Mar 08 02:02:24 crc kubenswrapper[4762]: I0308 02:02:24.696786 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-xtp5w" podUID="ebd76fbf-3a5c-409a-9c6c-5052042a769c" containerName="nmstate-handler" probeResult="failure" output="command timed out"
Mar 08 02:02:24 crc kubenswrapper[4762]: I0308 02:02:24.808914 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" podUID="97490dfa-d4e5-4013-8a53-199f5872ea4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:24 crc kubenswrapper[4762]: I0308 02:02:24.809011 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.092081 4762 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-8fxrr container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.092612 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" podUID="7d1d5c16-4b49-4abf-8b13-0df0fda43b6a" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.092866 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.093583 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" podUID="cbdc8d75-414a-451a-b594-dc430abfcc09" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.86:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.093694 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.093946 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" podUID="cbdc8d75-414a-451a-b594-dc430abfcc09" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.86:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.094053 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.095747 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"062367de006e2becd4037ea869e3878fd1ca13373430f1c3b639501ad9459548"} pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" containerMessage="Container webhook-server failed liveness probe, will be restarted"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.095816 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" podUID="cbdc8d75-414a-451a-b594-dc430abfcc09" containerName="webhook-server" containerID="cri-o://062367de006e2becd4037ea869e3878fd1ca13373430f1c3b639501ad9459548" gracePeriod=2
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.191614 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" event={"ID":"b242b134-d2b7-4e03-a6c1-cd046de89c3d","Type":"ContainerDied","Data":"fb7f58da6b70b80b72db9aa10575196d3436636464686918b3cb6f90ff8e8e84"}
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.192079 4762 generic.go:334] "Generic (PLEG): container finished" podID="b242b134-d2b7-4e03-a6c1-cd046de89c3d" containerID="fb7f58da6b70b80b72db9aa10575196d3436636464686918b3cb6f90ff8e8e84" exitCode=1
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.196691 4762 scope.go:117] "RemoveContainer" containerID="fb7f58da6b70b80b72db9aa10575196d3436636464686918b3cb6f90ff8e8e84"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.214123 4762 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-nsgkb container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.214191 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" podUID="6fd90908-2008-4941-ba65-62557823e8a0" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.214288 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.240388 4762 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-phxp4 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.240461 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" podUID="6e603ecb-b9b1-4fba-af81-9da07c682395" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.247026 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4"
Mar 08 02:02:25 crc kubenswrapper[4762]: E0308 02:02:25.388028 4762 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.393519 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz" podUID="20b130fa-d7f7-441a-bd96-0d5858f1ece1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.395261 4762 patch_prober.go:28] interesting pod/console-859d87bf79-sbgvn container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.130:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.395325 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-859d87bf79-sbgvn" podUID="9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.130:8443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.395409 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-859d87bf79-sbgvn"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.395842 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz" podUID="20b130fa-d7f7-441a-bd96-0d5858f1ece1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.96:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.420466 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-vq8xm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": context deadline exceeded" start-of-body=
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.420524 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" podUID="1efe4203-538b-41b7-9e52-832aeceaac3b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": context deadline exceeded"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.438243 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-lmbn4 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.438326 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" podUID="3be01762-1f06-4534-8426-ab3b41e8e8d8" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.562987 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb" podUID="625fe5b5-181a-47db-8656-00c8f5fc045f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.98:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.604016 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz" podUID="e2cdcc67-fa0d-4f82-9ca7-219626ee5fdd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.99:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.645992 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz" podUID="e2cdcc67-fa0d-4f82-9ca7-219626ee5fdd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.99:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.646030 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-vq8xm container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.646100 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.646092 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" podUID="1efe4203-538b-41b7-9e52-832aeceaac3b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.646153 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-lmbn4 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.646159 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n" podUID="60096a41-cef5-4818-a549-96b51b04cd8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.646175 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" podUID="3be01762-1f06-4534-8426-ab3b41e8e8d8" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.686966 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b" podUID="2352d4f2-aadc-4ad7-806e-9324d3be5116" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.687126 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.699342 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-8szql" podUID="c578d6b5-daa2-4fd3-88ee-29ab82caaa5a" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.700506 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-8szql" podUID="c578d6b5-daa2-4fd3-88ee-29ab82caaa5a" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.700901 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-7xslc" podUID="b83aab9a-f794-43d3-af07-0a00dac138da" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.701979 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-7xslc" podUID="b83aab9a-f794-43d3-af07-0a00dac138da" containerName="registry-server" probeResult="failure" output="command timed out"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.811187 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s" podUID="d5f0be01-26e9-4c4e-8122-61659529e505" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.811179 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n" podUID="60096a41-cef5-4818-a549-96b51b04cd8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.811314 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb" podUID="625fe5b5-181a-47db-8656-00c8f5fc045f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.98:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.811362 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.811430 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.853052 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2" podUID="5edc85d7-4f23-4c94-a998-17f8402c37d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.853131 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b" podUID="2352d4f2-aadc-4ad7-806e-9324d3be5116" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.934973 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww" podUID="ead6b665-cd0f-475a-a71b-33fd36246484" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.935336 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.934991 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s" podUID="d5f0be01-26e9-4c4e-8122-61659529e505" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:25 crc kubenswrapper[4762]: I0308 02:02:25.935689 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.058081 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" podUID="97490dfa-d4e5-4013-8a53-199f5872ea4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.098957 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2" podUID="5edc85d7-4f23-4c94-a998-17f8402c37d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.099020 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="b14f9065-ffe7-430a-b9e9-f62ce942558e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.2:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.099063 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.099090 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="b14f9065-ffe7-430a-b9e9-f62ce942558e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.1.2:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.099293 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body=
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.099482 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.420338 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-vq8xm container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.57:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.420701 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" podUID="1efe4203-538b-41b7-9e52-832aeceaac3b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.427972 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww" podUID="ead6b665-cd0f-475a-a71b-33fd36246484" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.428167 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" podUID="6b30a18d-93d3-48de-9b32-7c2326e04220" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.437157 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-lmbn4 container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.58:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.437251 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" podUID="3be01762-1f06-4534-8426-ab3b41e8e8d8" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.552039 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-4qgst" podUID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.552836 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4qgst"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.634620 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv" podUID="7f6a4543-a300-4393-93e0-fcfeae3ccd61" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.634735 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.704211 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-ffhbt" podUID="21774b04-29d4-4687-b650-87eed791f3e8" containerName="ovs-vswitchd" probeResult="failure" output="command timed out"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.704339 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-ffhbt" podUID="21774b04-29d4-4687-b650-87eed791f3e8" containerName="ovsdb-server" probeResult="failure" output="command timed out"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.715911 4762 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-8fxrr container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.715957 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" podUID="7d1d5c16-4b49-4abf-8b13-0df0fda43b6a" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.715965 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" podUID="d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.715993 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv" podUID="7f6a4543-a300-4393-93e0-fcfeae3ccd61" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.716109 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.841606 4762 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-phxp4 container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.841701 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" podUID="6e603ecb-b9b1-4fba-af81-9da07c682395" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.841781 4762 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.62:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.841846 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="748fb55a-dbe2-4b8b-9e08-577495a258a4" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.841964 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.842146 4762 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.842172 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="306f3a2d-d090-4aad-b84c-05078f5f8be5" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.842238 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0"
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.842236 4762 patch_prober.go:28] interesting pod/logging-loki-distributor-5d5548c9f5-8fxrr container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.842325 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" podUID="7d1d5c16-4b49-4abf-8b13-0df0fda43b6a" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.54:3101/ready\": net/http: request canceled while waiting for
connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.842395 4762 patch_prober.go:28] interesting pod/logging-loki-query-frontend-6d6859c548-phxp4 container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.842429 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" podUID="6e603ecb-b9b1-4fba-af81-9da07c682395" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.56:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.842480 4762 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.842500 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="4c30f467-b939-4c68-91f0-707c6893e6ff" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.842552 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0" Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.842693 4762 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps" podUID="ac0364ec-ad05-431d-b2f4-c92353f15f4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.842841 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps" Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.843159 4762 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-nsgkb container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.843220 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" podUID="6fd90908-2008-4941-ba65-62557823e8a0" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/loki/api/v1/status/buildinfo\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.843290 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps" podUID="ac0364ec-ad05-431d-b2f4-c92353f15f4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.841622 4762 patch_prober.go:28] interesting pod/logging-loki-querier-76bf7b6d45-nsgkb container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get 
\"https://10.217.0.55:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.843386 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" podUID="6fd90908-2008-4941-ba65-62557823e8a0" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.55:3101/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.923473 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx" podUID="da66283d-dd88-4e6a-a4ad-496064bc8a78" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.923585 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx" podUID="da66283d-dd88-4e6a-a4ad-496064bc8a78" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.923596 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" podUID="05d1f89d-b2b2-48ff-8555-e9f68ac3300a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:26 crc kubenswrapper[4762]: I0308 02:02:26.924116 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx" Mar 08 02:02:27 
crc kubenswrapper[4762]: I0308 02:02:27.004904 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-vq8xm container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.57:8081/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.004921 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" podUID="6b30a18d-93d3-48de-9b32-7c2326e04220" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.004964 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" podUID="1efe4203-538b-41b7-9e52-832aeceaac3b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.004900 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" podUID="05d1f89d-b2b2-48ff-8555-e9f68ac3300a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.004934 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-lmbn4 container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.58:8081/live\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:27 
crc kubenswrapper[4762]: I0308 02:02:27.006645 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" podUID="3be01762-1f06-4534-8426-ab3b41e8e8d8" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/live\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.005034 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.006749 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.086914 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4qgst" podUID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.086987 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-4qgst" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.087250 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-4qgst" podUID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.087521 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-4qgst" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.168970 4762 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h" podUID="1906010e-f253-4d33-8e97-96d8860c3ff6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.169077 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-dh78h" podUID="1906010e-f253-4d33-8e97-96d8860c3ff6" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.169362 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" podUID="d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.245383 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"8078bc31cae92050783ac2dd468d11b53c5b3670e54aad31fe27ca96d77a0828"} pod="metallb-system/frr-k8s-4qgst" containerMessage="Container controller failed liveness probe, will be restarted" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.245699 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"01233fc768238da4d221288098b6de0ccdbb6b9b8f604c4f036df7b0d4542736"} pod="metallb-system/frr-k8s-4qgst" containerMessage="Container frr failed liveness probe, will be restarted" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.246662 4762 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="metallb-system/frr-k8s-4qgst" podUID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerName="frr" containerID="cri-o://01233fc768238da4d221288098b6de0ccdbb6b9b8f604c4f036df7b0d4542736" gracePeriod=2 Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.252863 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" podUID="3216ee69-307e-4151-889b-6e71f6e8c47a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.253226 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" podUID="3216ee69-307e-4151-889b-6e71f6e8c47a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.253286 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-86ddb6bd46-qgc88" podUID="0b0d938e-fbb6-4ed9-8822-c87f8ce564e3" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.89:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.253368 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" Mar 08 02:02:27 crc kubenswrapper[4762]: E0308 02:02:27.290142 4762 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcbdc8d75_414a_451a_b594_dc430abfcc09.slice/crio-062367de006e2becd4037ea869e3878fd1ca13373430f1c3b639501ad9459548.scope\": RecentStats: unable to find data in memory cache]" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.333094 4762 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-xpcwq container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.73:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.333137 4762 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-xpcwq container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.73:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.333221 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" podUID="ba7245b9-c69a-44e9-bbff-61213cb5a743" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.73:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.333249 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-xpcwq" podUID="ba7245b9-c69a-44e9-bbff-61213cb5a743" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.73:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.333304 4762 prober.go:107] "Probe failed" 
probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" podUID="cbdc8d75-414a-451a-b594-dc430abfcc09" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.86:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.333478 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" podUID="f82c21a8-e080-4d70-b898-8c15a7b71989" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.333552 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" podUID="f82c21a8-e080-4d70-b898-8c15a7b71989" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.333640 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.342838 4762 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.61:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.342875 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-compactor-0" podUID="4c30f467-b939-4c68-91f0-707c6893e6ff" containerName="loki-compactor" 
probeResult="failure" output="Get \"https://10.217.0.61:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.376266 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" podUID="1bc55675-0793-4489-b05d-03581df96527" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.376350 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" podUID="8e8be3de-e055-441d-bfff-7b966b35dc15" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.376420 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.376639 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" podUID="1bc55675-0793-4489-b05d-03581df96527" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.376705 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.376865 4762 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" podUID="8e8be3de-e055-441d-bfff-7b966b35dc15" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.376931 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" podUID="7a1f5442-2f22-4dff-b59a-0a8233a83b41" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.88:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.377038 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.381484 4762 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.62:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.381524 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="748fb55a-dbe2-4b8b-9e08-577495a258a4" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.417221 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" podUID="7a1f5442-2f22-4dff-b59a-0a8233a83b41" containerName="frr-k8s-webhook-server" probeResult="failure" 
output="Get \"http://10.217.0.88:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.417303 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.417503 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-86ddb6bd46-qgc88" podUID="0b0d938e-fbb6-4ed9-8822-c87f8ce564e3" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.89:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.417626 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb" podUID="625fe5b5-181a-47db-8656-00c8f5fc045f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.98:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.418033 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b" podUID="2352d4f2-aadc-4ad7-806e-9324d3be5116" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.100:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.458964 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n" podUID="60096a41-cef5-4818-a549-96b51b04cd8f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.97:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: 
I0308 02:02:27.494877 4762 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.60:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.494932 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-ingester-0" podUID="306f3a2d-d090-4aad-b84c-05078f5f8be5" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.541085 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" podUID="bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.541190 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.541465 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s" podUID="d5f0be01-26e9-4c4e-8122-61659529e505" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.541771 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2" 
podUID="5edc85d7-4f23-4c94-a998-17f8402c37d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.541914 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz" podUID="e2cdcc67-fa0d-4f82-9ca7-219626ee5fdd" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.99:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.542061 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww" podUID="ead6b665-cd0f-475a-a71b-33fd36246484" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.595037 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-4qgst" podUID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.675935 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv" podUID="7f6a4543-a300-4393-93e0-fcfeae3ccd61" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.699250 4762 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-ovn-kubernetes/ovnkube-node-q6mt4" podUID="e6f987e6-c9d7-410e-9401-492e35771592" containerName="ovnkube-controller" probeResult="failure" output="command timed out" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.701835 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Mar 08 02:02:27 crc kubenswrapper[4762]: E0308 02:02:27.830617 4762 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"crc\": the object has been modified; please apply your changes to the latest version and try again" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.836255 4762 status_manager.go:875] "Failed to update status for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a1b71198-134e-4cec-9f0b-b28979adf785\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-08T02:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: [openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-08T02:02:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openshift-config-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d847f33cd40fb03b71a9e6df743c36e9d4b16087f3dcdcb9a0edb416e0cf424e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"openshift-config-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-08T00:25:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"serving-cert\\\"},{\\\"mountPath\\\":\\\"/available-featuregates\\\",\\\"name\\\":\\\"available-featuregates\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njxsf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-config-operator\"/\"openshift-config-operator-7777fb866f-t95jr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=8s\": context deadline exceeded" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.839962 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.840009 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-84dbj" 
podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.840092 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-84dbj" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.843453 4762 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.843611 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="4c30f467-b939-4c68-91f0-707c6893e6ff" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.61:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.847099 4762 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.847139 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="306f3a2d-d090-4aad-b84c-05078f5f8be5" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.847113 4762 patch_prober.go:28] interesting 
pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.847181 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="748fb55a-dbe2-4b8b-9e08-577495a258a4" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.62:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.882042 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" podUID="8fc55d76-cb72-4ac9-b132-24b997e298a3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.882159 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" podUID="d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.882196 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.882338 4762 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-nq4dh container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get 
\"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.882373 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" podUID="9d3224d2-e83a-4707-9e42-e13d68451af3" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.882401 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.882418 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.882466 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.882515 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-84dbj" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.883718 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" 
containerStatusID={"Type":"cri-o","ID":"8e4409c5d05bd5ef69554e617fce93f714a858353544ff940d7a031d6aa03879"} pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.883782 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" podUID="9d3224d2-e83a-4707-9e42-e13d68451af3" containerName="authentication-operator" containerID="cri-o://8e4409c5d05bd5ef69554e617fce93f714a858353544ff940d7a031d6aa03879" gracePeriod=30 Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.923994 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps" podUID="ac0364ec-ad05-431d-b2f4-c92353f15f4c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:27 crc kubenswrapper[4762]: I0308 02:02:27.964987 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx" podUID="da66283d-dd88-4e6a-a4ad-496064bc8a78" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.002037 4762 patch_prober.go:28] interesting pod/console-operator-58897d9998-tw6wd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.002287 4762 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" podUID="1d484943-583d-493a-ab04-bf99847ff4c4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.002346 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.001988 4762 patch_prober.go:28] interesting pod/console-operator-58897d9998-tw6wd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.003148 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" podUID="1d484943-583d-493a-ab04-bf99847ff4c4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.003220 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.004668 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"1eeb7a66256d8f33ea3c1dcc9616f543e7716c382ff61df3a4c68bded260c3ce"} pod="openshift-console-operator/console-operator-58897d9998-tw6wd" containerMessage="Container 
console-operator failed liveness probe, will be restarted" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.004735 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" podUID="1d484943-583d-493a-ab04-bf99847ff4c4" containerName="console-operator" containerID="cri-o://1eeb7a66256d8f33ea3c1dcc9616f543e7716c382ff61df3a4c68bded260c3ce" gracePeriod=30 Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.090000 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" podUID="05d1f89d-b2b2-48ff-8555-e9f68ac3300a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.090009 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" podUID="6b30a18d-93d3-48de-9b32-7c2326e04220" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.230384 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" podUID="4d895a55-fc09-4986-ae61-19b0c5425d15" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.230978 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.257657 4762 generic.go:334] "Generic 
(PLEG): container finished" podID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerID="01233fc768238da4d221288098b6de0ccdbb6b9b8f604c4f036df7b0d4542736" exitCode=143 Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.257738 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4qgst" event={"ID":"35f236f0-d58d-4bb2-a6cd-689097c3fbf4","Type":"ContainerDied","Data":"01233fc768238da4d221288098b6de0ccdbb6b9b8f604c4f036df7b0d4542736"} Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.260820 4762 generic.go:334] "Generic (PLEG): container finished" podID="cbdc8d75-414a-451a-b594-dc430abfcc09" containerID="062367de006e2becd4037ea869e3878fd1ca13373430f1c3b639501ad9459548" exitCode=137 Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.260898 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" event={"ID":"cbdc8d75-414a-451a-b594-dc430abfcc09","Type":"ContainerDied","Data":"062367de006e2becd4037ea869e3878fd1ca13373430f1c3b639501ad9459548"} Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.262207 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"89195bfcdaac8b8b5ad1df1c8fdb99747829a38dd538d179e0fa6390b90dfa72"} pod="openshift-console/downloads-7954f5f757-84dbj" containerMessage="Container download-server failed liveness probe, will be restarted" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.262263 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" containerID="cri-o://89195bfcdaac8b8b5ad1df1c8fdb99747829a38dd538d179e0fa6390b90dfa72" gracePeriod=2 Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.262450 4762 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"0d392033a642a70fd59abb326568a8384adda73208a797664556c635ffc1ff46"} pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.262495 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" podUID="7a1f5442-2f22-4dff-b59a-0a8233a83b41" containerName="frr-k8s-webhook-server" containerID="cri-o://0d392033a642a70fd59abb326568a8384adda73208a797664556c635ffc1ff46" gracePeriod=10 Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.296960 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" podUID="3216ee69-307e-4151-889b-6e71f6e8c47a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.342939 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-4qgst" podUID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerName="controller" containerID="cri-o://8078bc31cae92050783ac2dd468d11b53c5b3670e54aad31fe27ca96d77a0828" gracePeriod=2 Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.376971 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" podUID="f82c21a8-e080-4d70-b898-8c15a7b71989" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.464158 4762 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/prometheus-metric-storage-0" podUID="8103d22d-043e-4af1-a19d-307905e2a05f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.159:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.464666 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" podUID="7a1f5442-2f22-4dff-b59a-0a8233a83b41" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.88:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.464181 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" podUID="1bc55675-0793-4489-b05d-03581df96527" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.464797 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" podUID="8e8be3de-e055-441d-bfff-7b966b35dc15" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.464254 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="8103d22d-043e-4af1-a19d-307905e2a05f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.159:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: 
I0308 02:02:28.464986 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.465062 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-dtdxk" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.477862 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.477911 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.637063 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-4qgst" podUID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.700042 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-rfbxq" podUID="0870b34f-2648-451a-a34e-8555e4e4982a" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.700592 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-engine-7588678759-6jpjt" 
podUID="9976fcf2-7f49-45af-afe2-d5c3e07f2cac" containerName="heat-engine" probeResult="failure" output="command timed out" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.701299 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-rfbxq" podUID="0870b34f-2648-451a-a34e-8555e4e4982a" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.701630 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-engine-7588678759-6jpjt" podUID="9976fcf2-7f49-45af-afe2-d5c3e07f2cac" containerName="heat-engine" probeResult="failure" output="command timed out" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.701686 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-78hq2" podUID="4f3c2509-9848-4e76-96ae-8f815f66d6d7" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.702406 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-78hq2" podUID="4f3c2509-9848-4e76-96ae-8f815f66d6d7" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.800046 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.800098 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.800175 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.800811 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.800871 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.800920 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.800962 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"20dc728d30786c61357663c07671e3894813f2935a0a4dc5797eb9ba02b16e98"} pod="openshift-ingress/router-default-5444994796-kn22k" containerMessage="Container router failed liveness probe, will be restarted" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.800999 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" containerID="cri-o://20dc728d30786c61357663c07671e3894813f2935a0a4dc5797eb9ba02b16e98" gracePeriod=10 Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 
02:02:28.882968 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.883134 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:28 crc kubenswrapper[4762]: I0308 02:02:28.925158 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" podUID="8fc55d76-cb72-4ac9-b132-24b997e298a3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.108945 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" podUID="9056b43f-9cc2-446b-a516-04ba97bf2fd0" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.46:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.109467 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.278020 4762 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-nh2q6 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.278078 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" podUID="04980224-fe82-485b-83f9-9c3d30b196db" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.278156 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.278185 4762 patch_prober.go:28] interesting pod/nmstate-webhook-786f45cff4-2fjlb container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.80:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.278229 4762 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-nh2q6 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.278270 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" podUID="04980224-fe82-485b-83f9-9c3d30b196db" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" 
Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.278232 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb" podUID="274d72c4-da34-4213-9aa4-daa52cf6668f" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.80:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.278303 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.278426 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.280675 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="package-server-manager" containerStatusID={"Type":"cri-o","ID":"f9363b8fc8ac34db5baa8b6349034b057f9b994a6c568d3a4bc373a0a2ec92a9"} pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" containerMessage="Container package-server-manager failed liveness probe, will be restarted" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.280722 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" podUID="04980224-fe82-485b-83f9-9c3d30b196db" containerName="package-server-manager" containerID="cri-o://f9363b8fc8ac34db5baa8b6349034b057f9b994a6c568d3a4bc373a0a2ec92a9" gracePeriod=30 Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.319037 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" podUID="4d895a55-fc09-4986-ae61-19b0c5425d15" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.360039 4762 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.360114 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.372641 4762 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vwrhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.372678 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" podUID="741e90e6-8de3-4054-94cf-7ada0da0e454" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.372713 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.372793 4762 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vwrhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.372890 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" podUID="741e90e6-8de3-4054-94cf-7ada0da0e454" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.373129 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.373929 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"e13cb7aace6e15528746f589d4d72d35b027591ec9f937bb15e317f6a9fef24c"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" containerMessage="Container olm-operator failed liveness probe, will be restarted" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.373989 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" podUID="741e90e6-8de3-4054-94cf-7ada0da0e454" containerName="olm-operator" containerID="cri-o://e13cb7aace6e15528746f589d4d72d35b027591ec9f937bb15e317f6a9fef24c" gracePeriod=30 Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 
02:02:29.404205 4762 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-88s4d container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.404269 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" podUID="7a27cd53-cc43-4227-a15a-d55e0bfaf81d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.404210 4762 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-88s4d container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.404356 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.404374 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" podUID="7a27cd53-cc43-4227-a15a-d55e0bfaf81d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.404414 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.405942 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"4ad0e6773b6f050fa791a97c0d45e88d00db2bb3edd49f357d7eb328a375cb52"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" containerMessage="Container packageserver failed liveness probe, will be restarted" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.405988 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" podUID="7a27cd53-cc43-4227-a15a-d55e0bfaf81d" containerName="packageserver" containerID="cri-o://4ad0e6773b6f050fa791a97c0d45e88d00db2bb3edd49f357d7eb328a375cb52" gracePeriod=30 Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.462476 4762 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dbz7x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.462540 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" podUID="4de5942e-acf8-4138-acc3-42c177a7f997" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.462555 4762 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dbz7x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.462589 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.462622 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" podUID="4de5942e-acf8-4138-acc3-42c177a7f997" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.462722 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.505889 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" podUID="7a1f5442-2f22-4dff-b59a-0a8233a83b41" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.88:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.561927 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-k687p" podUID="b2dce5bf-2a64-44af-bfe2-0a15fd5d357d" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.41:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.697654 4762 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-nmstate/nmstate-handler-xtp5w" podUID="ebd76fbf-3a5c-409a-9c6c-5052042a769c" containerName="nmstate-handler" probeResult="failure" output="command timed out" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.698031 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-xtp5w" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.730972 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-4j4bt" podUID="3cafb56e-d1ea-48b5-9b1c-691e86cba0d9" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.731076 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-4j4bt" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.730974 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-4j4bt" podUID="3cafb56e-d1ea-48b5-9b1c-691e86cba0d9" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.731309 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4j4bt" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.732780 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"cb5db0e89275aa1ffd58ba8eff7debeec9669527780fa26451e57521ada330fc"} pod="metallb-system/speaker-4j4bt" containerMessage="Container speaker failed liveness probe, will be restarted" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.732855 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-4j4bt" 
podUID="3cafb56e-d1ea-48b5-9b1c-691e86cba0d9" containerName="speaker" containerID="cri-o://cb5db0e89275aa1ffd58ba8eff7debeec9669527780fa26451e57521ada330fc" gracePeriod=2 Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.844062 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.844098 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.926085 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:29 crc kubenswrapper[4762]: I0308 02:02:29.926152 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.152137 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" podUID="9056b43f-9cc2-446b-a516-04ba97bf2fd0" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.46:6080/healthz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.278967 4762 patch_prober.go:28] interesting pod/nmstate-webhook-786f45cff4-2fjlb container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.80:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.279360 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb" podUID="274d72c4-da34-4213-9aa4-daa52cf6668f" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.80:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.287986 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" event={"ID":"b242b134-d2b7-4e03-a6c1-cd046de89c3d","Type":"ContainerStarted","Data":"4d3eac3b0666b776cf932ed923e457cf2d6e0d7ed93fd66838ec0186b9ac3666"} Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.288751 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.288935 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"140a56ad7d710cb74722b3bd1b443cb1f947abb10cd84309e956a26cad1aae5f"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.288987 4762 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" podUID="4de5942e-acf8-4138-acc3-42c177a7f997" containerName="catalog-operator" containerID="cri-o://140a56ad7d710cb74722b3bd1b443cb1f947abb10cd84309e956a26cad1aae5f" gracePeriod=30 Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.320082 4762 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-nh2q6 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.320178 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" podUID="04980224-fe82-485b-83f9-9c3d30b196db" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.405693 4762 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-88s4d container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.405794 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" podUID="7a27cd53-cc43-4227-a15a-d55e0bfaf81d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.420164 4762 
patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-vq8xm container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.420242 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-vq8xm container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.420327 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" podUID="1efe4203-538b-41b7-9e52-832aeceaac3b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.57:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.420289 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-vq8xm" podUID="1efe4203-538b-41b7-9e52-832aeceaac3b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.57:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.424093 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-xtp5w" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.435523 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4j4bt" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.437554 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-lmbn4 
container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.437620 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" podUID="3be01762-1f06-4534-8426-ab3b41e8e8d8" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.58:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.437640 4762 patch_prober.go:28] interesting pod/logging-loki-gateway-58595d78f8-lmbn4 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.437729 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-58595d78f8-lmbn4" podUID="3be01762-1f06-4534-8426-ab3b41e8e8d8" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.58:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.696614 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="0d82ab27-d2d8-486a-8514-2af542e4223a" containerName="galera" probeResult="failure" output="command timed out" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.696716 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.696827 4762 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/openstack-galera-0" podUID="0d82ab27-d2d8-486a-8514-2af542e4223a" containerName="galera" probeResult="failure" output="command timed out" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.696889 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.698029 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"64889e7ad2b231700461464a45111bfb6b179f3dc972c45ad8da27d64c23a1b5"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.755531 4762 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": unexpected EOF" start-of-body= Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.756029 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": unexpected EOF" Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.755658 4762 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": unexpected EOF" start-of-body= Mar 08 02:02:30 crc kubenswrapper[4762]: I0308 02:02:30.756165 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" 
probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": unexpected EOF" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.288141 4762 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dbz7x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.288557 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" podUID="4de5942e-acf8-4138-acc3-42c177a7f997" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.301304 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" event={"ID":"cbdc8d75-414a-451a-b594-dc430abfcc09","Type":"ContainerStarted","Data":"1b90432ae435286aec51fd00d61623d5b9df1391723e5d867740fec99e310b5f"} Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.301413 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.327097 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-tw6wd_1d484943-583d-493a-ab04-bf99847ff4c4/console-operator/0.log" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.327176 4762 generic.go:334] "Generic (PLEG): container finished" podID="1d484943-583d-493a-ab04-bf99847ff4c4" containerID="1eeb7a66256d8f33ea3c1dcc9616f543e7716c382ff61df3a4c68bded260c3ce" exitCode=1 Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 
02:02:31.327269 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" event={"ID":"1d484943-583d-493a-ab04-bf99847ff4c4","Type":"ContainerDied","Data":"1eeb7a66256d8f33ea3c1dcc9616f543e7716c382ff61df3a4c68bded260c3ce"} Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.332336 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-7954f5f757-84dbj_6e8e8070-7d3f-4a58-b1ce-6f240bb0170d/download-server/0.log" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.332401 4762 generic.go:334] "Generic (PLEG): container finished" podID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerID="89195bfcdaac8b8b5ad1df1c8fdb99747829a38dd538d179e0fa6390b90dfa72" exitCode=137 Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.332489 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-84dbj" event={"ID":"6e8e8070-7d3f-4a58-b1ce-6f240bb0170d","Type":"ContainerDied","Data":"89195bfcdaac8b8b5ad1df1c8fdb99747829a38dd538d179e0fa6390b90dfa72"} Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.334867 4762 generic.go:334] "Generic (PLEG): container finished" podID="9d3224d2-e83a-4707-9e42-e13d68451af3" containerID="8e4409c5d05bd5ef69554e617fce93f714a858353544ff940d7a031d6aa03879" exitCode=0 Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.334919 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" event={"ID":"9d3224d2-e83a-4707-9e42-e13d68451af3","Type":"ContainerDied","Data":"8e4409c5d05bd5ef69554e617fce93f714a858353544ff940d7a031d6aa03879"} Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.343959 4762 generic.go:334] "Generic (PLEG): container finished" podID="35f236f0-d58d-4bb2-a6cd-689097c3fbf4" containerID="8078bc31cae92050783ac2dd468d11b53c5b3670e54aad31fe27ca96d77a0828" exitCode=137 Mar 08 02:02:31 crc kubenswrapper[4762]: 
I0308 02:02:31.344073 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4qgst" event={"ID":"35f236f0-d58d-4bb2-a6cd-689097c3fbf4","Type":"ContainerDied","Data":"8078bc31cae92050783ac2dd468d11b53c5b3670e54aad31fe27ca96d77a0828"} Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.346748 4762 generic.go:334] "Generic (PLEG): container finished" podID="97490dfa-d4e5-4013-8a53-199f5872ea4c" containerID="c98b9456d1cc1989c81e5979ad16f1d9dbd5bd434866f5818483f357d8933810" exitCode=1 Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.346950 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" event={"ID":"97490dfa-d4e5-4013-8a53-199f5872ea4c","Type":"ContainerDied","Data":"c98b9456d1cc1989c81e5979ad16f1d9dbd5bd434866f5818483f357d8933810"} Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.349288 4762 scope.go:117] "RemoveContainer" containerID="c98b9456d1cc1989c81e5979ad16f1d9dbd5bd434866f5818483f357d8933810" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.421522 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-k687p" podUID="b2dce5bf-2a64-44af-bfe2-0a15fd5d357d" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.466354 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="8103d22d-043e-4af1-a19d-307905e2a05f" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.159:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.478191 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: 
Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.478278 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.697520 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f50a5390-b172-470a-bcfd-161e360d90db" containerName="galera" probeResult="failure" output="command timed out" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.697839 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.698099 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="f50a5390-b172-470a-bcfd-161e360d90db" containerName="galera" probeResult="failure" output="command timed out" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.698134 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.698158 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="0d82ab27-d2d8-486a-8514-2af542e4223a" containerName="galera" probeResult="failure" output="command timed out" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.699085 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" 
containerStatusID={"Type":"cri-o","ID":"ab2d98bfe519b063c5ee46d89c20f223f11a485732e05ac73d9868a4128c5e19"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.700272 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-xtl98" podUID="0707d234-c53e-4212-b289-65a10c0b1502" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.700322 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xtl98" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.700916 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-xtl98" podUID="0707d234-c53e-4212-b289-65a10c0b1502" containerName="registry-server" probeResult="failure" output="command timed out" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.700959 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-xtl98" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.701074 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"6f49524d3d81fb6b57e665e0100ba29602d39ed8c8839b5fef921864ea3f8c2e"} pod="openstack-operators/openstack-operator-index-xtl98" containerMessage="Container registry-server failed liveness probe, will be restarted" Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.701103 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-xtl98" podUID="0707d234-c53e-4212-b289-65a10c0b1502" containerName="registry-server" containerID="cri-o://6f49524d3d81fb6b57e665e0100ba29602d39ed8c8839b5fef921864ea3f8c2e" 
gracePeriod=30 Mar 08 02:02:31 crc kubenswrapper[4762]: E0308 02:02:31.715197 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f49524d3d81fb6b57e665e0100ba29602d39ed8c8839b5fef921864ea3f8c2e" cmd=["grpc_health_probe","-addr=:50051"] Mar 08 02:02:31 crc kubenswrapper[4762]: I0308 02:02:31.732747 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 08 02:02:31 crc kubenswrapper[4762]: E0308 02:02:31.735061 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f49524d3d81fb6b57e665e0100ba29602d39ed8c8839b5fef921864ea3f8c2e" cmd=["grpc_health_probe","-addr=:50051"] Mar 08 02:02:31 crc kubenswrapper[4762]: E0308 02:02:31.736716 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f49524d3d81fb6b57e665e0100ba29602d39ed8c8839b5fef921864ea3f8c2e" cmd=["grpc_health_probe","-addr=:50051"] Mar 08 02:02:31 crc kubenswrapper[4762]: E0308 02:02:31.736779 4762 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack-operators/openstack-operator-index-xtl98" podUID="0707d234-c53e-4212-b289-65a10c0b1502" containerName="registry-server" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.250324 4762 trace.go:236] Trace[1041379360]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-xpcwq" (08-Mar-2026 02:02:19.662) (total time: 12581ms): 
Mar 08 02:02:32 crc kubenswrapper[4762]: Trace[1041379360]: [12.581559251s] [12.581559251s] END Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.356470 4762 generic.go:334] "Generic (PLEG): container finished" podID="7a27cd53-cc43-4227-a15a-d55e0bfaf81d" containerID="4ad0e6773b6f050fa791a97c0d45e88d00db2bb3edd49f357d7eb328a375cb52" exitCode=0 Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.356560 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" event={"ID":"7a27cd53-cc43-4227-a15a-d55e0bfaf81d","Type":"ContainerDied","Data":"4ad0e6773b6f050fa791a97c0d45e88d00db2bb3edd49f357d7eb328a375cb52"} Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.359329 4762 generic.go:334] "Generic (PLEG): container finished" podID="3cafb56e-d1ea-48b5-9b1c-691e86cba0d9" containerID="cb5db0e89275aa1ffd58ba8eff7debeec9669527780fa26451e57521ada330fc" exitCode=0 Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.359381 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4j4bt" event={"ID":"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9","Type":"ContainerDied","Data":"cb5db0e89275aa1ffd58ba8eff7debeec9669527780fa26451e57521ada330fc"} Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.360865 4762 generic.go:334] "Generic (PLEG): container finished" podID="6b30a18d-93d3-48de-9b32-7c2326e04220" containerID="f858113b55891e2c17afbc2db41c811701ed452d2b386dfb2a62beb9b43a4dff" exitCode=1 Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.360915 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" event={"ID":"6b30a18d-93d3-48de-9b32-7c2326e04220","Type":"ContainerDied","Data":"f858113b55891e2c17afbc2db41c811701ed452d2b386dfb2a62beb9b43a4dff"} Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.361746 4762 scope.go:117] "RemoveContainer" 
containerID="f858113b55891e2c17afbc2db41c811701ed452d2b386dfb2a62beb9b43a4dff" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.362250 4762 generic.go:334] "Generic (PLEG): container finished" podID="2032bfa9-398b-4802-84bc-272c70f31afb" containerID="61f209625b5696ff4e2b1c69f6a95d86fd34e1db9bf5cc9b8ece7b4746f58686" exitCode=1 Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.362319 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67" event={"ID":"2032bfa9-398b-4802-84bc-272c70f31afb","Type":"ContainerDied","Data":"61f209625b5696ff4e2b1c69f6a95d86fd34e1db9bf5cc9b8ece7b4746f58686"} Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.363680 4762 scope.go:117] "RemoveContainer" containerID="61f209625b5696ff4e2b1c69f6a95d86fd34e1db9bf5cc9b8ece7b4746f58686" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.365243 4762 generic.go:334] "Generic (PLEG): container finished" podID="8fc55d76-cb72-4ac9-b132-24b997e298a3" containerID="1d211df9928790d0448bc1bbf2b5df004becd2a552faa846eb44cec7a37ebd49" exitCode=1 Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.365296 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" event={"ID":"8fc55d76-cb72-4ac9-b132-24b997e298a3","Type":"ContainerDied","Data":"1d211df9928790d0448bc1bbf2b5df004becd2a552faa846eb44cec7a37ebd49"} Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.365622 4762 scope.go:117] "RemoveContainer" containerID="1d211df9928790d0448bc1bbf2b5df004becd2a552faa846eb44cec7a37ebd49" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.369666 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.372350 4762 patch_prober.go:28] 
interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.372384 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.372463 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.373544 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.376037 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.376164 4762 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="26ae1944ffd2b77dcc8f996410ea88ed9aec65c78681e51eaed7801dd5610c9f" exitCode=1 Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.376222 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"26ae1944ffd2b77dcc8f996410ea88ed9aec65c78681e51eaed7801dd5610c9f"} Mar 08 02:02:32 crc 
kubenswrapper[4762]: I0308 02:02:32.376264 4762 scope.go:117] "RemoveContainer" containerID="ebb1494dfbc794f80c24a6246263105b532b4d319089fce3f30482da0af2f4c0" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.377998 4762 scope.go:117] "RemoveContainer" containerID="26ae1944ffd2b77dcc8f996410ea88ed9aec65c78681e51eaed7801dd5610c9f" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.378837 4762 generic.go:334] "Generic (PLEG): container finished" podID="d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3" containerID="a5bfcd2e020af424f26fdb487fb70a6a9480ba03790fbb4d2900fc010885abe5" exitCode=1 Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.378892 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" event={"ID":"d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3","Type":"ContainerDied","Data":"a5bfcd2e020af424f26fdb487fb70a6a9480ba03790fbb4d2900fc010885abe5"} Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.379348 4762 scope.go:117] "RemoveContainer" containerID="a5bfcd2e020af424f26fdb487fb70a6a9480ba03790fbb4d2900fc010885abe5" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.382685 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-tw6wd_1d484943-583d-493a-ab04-bf99847ff4c4/console-operator/0.log" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.382749 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" event={"ID":"1d484943-583d-493a-ab04-bf99847ff4c4","Type":"ContainerStarted","Data":"2024ce51311f5b82f064a9813826a28436e2957a0427c0c897e3f6d13e87a83f"} Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.383569 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.383918 4762 patch_prober.go:28] 
interesting pod/console-operator-58897d9998-tw6wd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.383958 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" podUID="1d484943-583d-493a-ab04-bf99847ff4c4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.385444 4762 generic.go:334] "Generic (PLEG): container finished" podID="8e8be3de-e055-441d-bfff-7b966b35dc15" containerID="28860753954b934f1d2cab67faa8722bec91c8b32b2b3718e4fd528a1f71283a" exitCode=1 Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.385494 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" event={"ID":"8e8be3de-e055-441d-bfff-7b966b35dc15","Type":"ContainerDied","Data":"28860753954b934f1d2cab67faa8722bec91c8b32b2b3718e4fd528a1f71283a"} Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.386516 4762 scope.go:117] "RemoveContainer" containerID="28860753954b934f1d2cab67faa8722bec91c8b32b2b3718e4fd528a1f71283a" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.389935 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nq4dh" event={"ID":"9d3224d2-e83a-4707-9e42-e13d68451af3","Type":"ContainerStarted","Data":"a828d5d8ce11ae67ba542b49b7f1aea2409ac94e4cb1a8fc33cf21bf1398fb75"} Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.400602 4762 generic.go:334] "Generic (PLEG): container finished" podID="7a1f5442-2f22-4dff-b59a-0a8233a83b41" 
containerID="0d392033a642a70fd59abb326568a8384adda73208a797664556c635ffc1ff46" exitCode=0 Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.400657 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" event={"ID":"7a1f5442-2f22-4dff-b59a-0a8233a83b41","Type":"ContainerDied","Data":"0d392033a642a70fd59abb326568a8384adda73208a797664556c635ffc1ff46"} Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.402298 4762 generic.go:334] "Generic (PLEG): container finished" podID="741e90e6-8de3-4054-94cf-7ada0da0e454" containerID="e13cb7aace6e15528746f589d4d72d35b027591ec9f937bb15e317f6a9fef24c" exitCode=0 Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.402351 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" event={"ID":"741e90e6-8de3-4054-94cf-7ada0da0e454","Type":"ContainerDied","Data":"e13cb7aace6e15528746f589d4d72d35b027591ec9f937bb15e317f6a9fef24c"} Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.408726 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-7954f5f757-84dbj_6e8e8070-7d3f-4a58-b1ce-6f240bb0170d/download-server/0.log" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.408896 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-84dbj" event={"ID":"6e8e8070-7d3f-4a58-b1ce-6f240bb0170d","Type":"ContainerStarted","Data":"6660d1ff6702a186a451fb7646e4db6bac460579a2fae0cded84c614621fb248"} Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.409472 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-84dbj" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.409639 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.409695 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.441175 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4qgst" event={"ID":"35f236f0-d58d-4bb2-a6cd-689097c3fbf4","Type":"ContainerStarted","Data":"437757f73e0abed408714eb1d41ffeed57ad3fc1f5a084d290f81834e9367960"} Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.444422 4762 generic.go:334] "Generic (PLEG): container finished" podID="4de5942e-acf8-4138-acc3-42c177a7f997" containerID="140a56ad7d710cb74722b3bd1b443cb1f947abb10cd84309e956a26cad1aae5f" exitCode=0 Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.445040 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" event={"ID":"4de5942e-acf8-4138-acc3-42c177a7f997","Type":"ContainerDied","Data":"140a56ad7d710cb74722b3bd1b443cb1f947abb10cd84309e956a26cad1aae5f"} Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.561340 4762 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.561403 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" 
probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.561443 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.586597 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.586635 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.618881 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 08 02:02:32 crc kubenswrapper[4762]: I0308 02:02:32.698606 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="f50a5390-b172-470a-bcfd-161e360d90db" containerName="galera" probeResult="failure" output="command timed out" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.069126 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-8pp92" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.457984 4762 generic.go:334] "Generic (PLEG): container finished" podID="20b130fa-d7f7-441a-bd96-0d5858f1ece1" containerID="fd1c28de8738fca4a59a122e825168621ce76c992957cc70cf9ecd2772eadcf1" exitCode=1 Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.458027 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz" 
event={"ID":"20b130fa-d7f7-441a-bd96-0d5858f1ece1","Type":"ContainerDied","Data":"fd1c28de8738fca4a59a122e825168621ce76c992957cc70cf9ecd2772eadcf1"} Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.459451 4762 scope.go:117] "RemoveContainer" containerID="fd1c28de8738fca4a59a122e825168621ce76c992957cc70cf9ecd2772eadcf1" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.464547 4762 generic.go:334] "Generic (PLEG): container finished" podID="a1b71198-134e-4cec-9f0b-b28979adf785" containerID="d847f33cd40fb03b71a9e6df743c36e9d4b16087f3dcdcb9a0edb416e0cf424e" exitCode=0 Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.464604 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" event={"ID":"a1b71198-134e-4cec-9f0b-b28979adf785","Type":"ContainerDied","Data":"d847f33cd40fb03b71a9e6df743c36e9d4b16087f3dcdcb9a0edb416e0cf424e"} Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.468118 4762 generic.go:334] "Generic (PLEG): container finished" podID="2352d4f2-aadc-4ad7-806e-9324d3be5116" containerID="54e87d48443a9e2100f7b0208291f5744ffbafd738aa53f7acdb15b2492d5c52" exitCode=1 Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.468167 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b" event={"ID":"2352d4f2-aadc-4ad7-806e-9324d3be5116","Type":"ContainerDied","Data":"54e87d48443a9e2100f7b0208291f5744ffbafd738aa53f7acdb15b2492d5c52"} Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.468548 4762 scope.go:117] "RemoveContainer" containerID="54e87d48443a9e2100f7b0208291f5744ffbafd738aa53f7acdb15b2492d5c52" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.478953 4762 generic.go:334] "Generic (PLEG): container finished" podID="1bc55675-0793-4489-b05d-03581df96527" containerID="11cf3d19101d96f9c6a847a44bc8ce0b617c8b5c4e9323fbac2d440cd7e73406" exitCode=1 Mar 08 02:02:33 crc kubenswrapper[4762]: 
I0308 02:02:33.479002 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" event={"ID":"1bc55675-0793-4489-b05d-03581df96527","Type":"ContainerDied","Data":"11cf3d19101d96f9c6a847a44bc8ce0b617c8b5c4e9323fbac2d440cd7e73406"} Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.479698 4762 scope.go:117] "RemoveContainer" containerID="11cf3d19101d96f9c6a847a44bc8ce0b617c8b5c4e9323fbac2d440cd7e73406" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.481472 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" event={"ID":"97490dfa-d4e5-4013-8a53-199f5872ea4c","Type":"ContainerStarted","Data":"1728858118aafbbe7cc9fff030b4fb47ebdc72a3e615d44c3275f7278bedeb6d"} Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.481953 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.485716 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.485925 4762 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dbz7x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.485965 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" podUID="4de5942e-acf8-4138-acc3-42c177a7f997" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection 
refused" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.489968 4762 generic.go:334] "Generic (PLEG): container finished" podID="5edc85d7-4f23-4c94-a998-17f8402c37d3" containerID="216408102c68c6490932598429145a0b9d34a768c4dcb4daf818646e77686486" exitCode=1 Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.490014 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2" event={"ID":"5edc85d7-4f23-4c94-a998-17f8402c37d3","Type":"ContainerDied","Data":"216408102c68c6490932598429145a0b9d34a768c4dcb4daf818646e77686486"} Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.490799 4762 scope.go:117] "RemoveContainer" containerID="216408102c68c6490932598429145a0b9d34a768c4dcb4daf818646e77686486" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.492599 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.499222 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.501025 4762 generic.go:334] "Generic (PLEG): container finished" podID="3216ee69-307e-4151-889b-6e71f6e8c47a" containerID="70737e4f4959882428b8977870051d3f4295fdc25ada91a048c3f1cfa5abcd62" exitCode=1 Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.501145 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" event={"ID":"3216ee69-307e-4151-889b-6e71f6e8c47a","Type":"ContainerDied","Data":"70737e4f4959882428b8977870051d3f4295fdc25ada91a048c3f1cfa5abcd62"} Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.501700 4762 scope.go:117] "RemoveContainer" 
containerID="70737e4f4959882428b8977870051d3f4295fdc25ada91a048c3f1cfa5abcd62" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.503010 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-scheduler" containerStatusID={"Type":"cri-o","ID":"386ccf0fd3fa0449e16e3593c95ba84bdb3918530f66e6a1dd3407bac7522130"} pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" containerMessage="Container kube-scheduler failed liveness probe, will be restarted" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.503098 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" containerID="cri-o://386ccf0fd3fa0449e16e3593c95ba84bdb3918530f66e6a1dd3407bac7522130" gracePeriod=30 Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.581136 4762 patch_prober.go:28] interesting pod/console-operator-58897d9998-tw6wd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.581164 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.581208 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.581265 4762 
patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-jr6wh container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.581282 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" podUID="977085a1-8184-4c52-8e8d-6cb64635e335" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.581308 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.581779 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" podUID="1d484943-583d-493a-ab04-bf99847ff4c4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.581867 4762 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-jr6wh container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.25:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.581893 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" podUID="977085a1-8184-4c52-8e8d-6cb64635e335" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.25:8081/healthz\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.581967 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.583029 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"c80d66e8e7e5668f7c00d99192f18daca4c57debfeb8a489761d0dd3d149108a"} pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" containerMessage="Container operator failed liveness probe, will be restarted" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.583071 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" podUID="977085a1-8184-4c52-8e8d-6cb64635e335" containerName="operator" containerID="cri-o://c80d66e8e7e5668f7c00d99192f18daca4c57debfeb8a489761d0dd3d149108a" gracePeriod=30 Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.677677 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.796976 4762 patch_prober.go:28] interesting pod/oauth-openshift-9c9dfc54c-9qcbq container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.797269 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" podUID="2e89b5ad-4281-471d-a5c5-55a2351a9cab" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.796987 4762 patch_prober.go:28] interesting pod/oauth-openshift-9c9dfc54c-9qcbq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.797328 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" podUID="2e89b5ad-4281-471d-a5c5-55a2351a9cab" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.65:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.797306 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.797440 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.799070 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"77eaabcbc693bcc171b95666032866ca1103633ba6fb9bcb20b9c9efe19a804a"} pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" containerMessage="Container oauth-openshift failed liveness probe, will be restarted" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.850029 4762 patch_prober.go:28] interesting pod/route-controller-manager-777f6d5845-rfx2s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.850086 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" podUID="4e3e6f1d-92e9-411e-a724-03fea1fc802b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.850037 4762 patch_prober.go:28] interesting pod/route-controller-manager-777f6d5845-rfx2s container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.850126 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" podUID="4e3e6f1d-92e9-411e-a724-03fea1fc802b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.850151 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.851829 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"b22e9652e34ae45c89cdc44e52741809be38ad85dfcc3388c69148816796b18c"} pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Mar 08 
02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.851868 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" podUID="4e3e6f1d-92e9-411e-a724-03fea1fc802b" containerName="route-controller-manager" containerID="cri-o://b22e9652e34ae45c89cdc44e52741809be38ad85dfcc3388c69148816796b18c" gracePeriod=30 Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.910583 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" Mar 08 02:02:33 crc kubenswrapper[4762]: I0308 02:02:33.988478 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-share-share1-0" podUID="2a5c5599-66a7-46b2-8f10-2bfe3905d5fd" containerName="manila-share" probeResult="failure" output="Get \"http://10.217.1.118:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.036157 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5d5548c9f5-8fxrr" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.229698 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76bf7b6d45-nsgkb" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.256798 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-6d6859c548-phxp4" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.310423 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.379949 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-8r57n" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.414389 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-6dwmz" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.417001 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-c6prb" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.438013 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.473928 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-m7h5s" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.478990 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.479041 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.543579 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" 
event={"ID":"d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3","Type":"ContainerStarted","Data":"b10dcb70679d8f2fc9eaee5e8d8a1328b5868984f234a2ab85aedc4313a07ec3"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.547801 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.563023 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" event={"ID":"a1b71198-134e-4cec-9f0b-b28979adf785","Type":"ContainerStarted","Data":"3ef22918e81c3ec9448d22e3bcc8b6a793f562bdae017696d0547df65a5b91c7"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.563661 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.565951 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" event={"ID":"3216ee69-307e-4151-889b-6e71f6e8c47a","Type":"ContainerStarted","Data":"d98f9aa25cf208088224f6e611557d636c5a6b43863c9ef4d17dd712d618adb5"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.566875 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.581241 4762 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="386ccf0fd3fa0449e16e3593c95ba84bdb3918530f66e6a1dd3407bac7522130" exitCode=0 Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.581304 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"386ccf0fd3fa0449e16e3593c95ba84bdb3918530f66e6a1dd3407bac7522130"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.642780 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-4qgst" event={"ID":"35f236f0-d58d-4bb2-a6cd-689097c3fbf4","Type":"ContainerStarted","Data":"bdbea7ad526c9045437999127f48b078c0de6c8502b75cba3e7771d0a9f09a5d"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.646615 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-4qgst" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.658662 4762 generic.go:334] "Generic (PLEG): container finished" podID="0707d234-c53e-4212-b289-65a10c0b1502" containerID="6f49524d3d81fb6b57e665e0100ba29602d39ed8c8839b5fef921864ea3f8c2e" exitCode=0 Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.658721 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xtl98" event={"ID":"0707d234-c53e-4212-b289-65a10c0b1502","Type":"ContainerDied","Data":"6f49524d3d81fb6b57e665e0100ba29602d39ed8c8839b5fef921864ea3f8c2e"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.686425 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67" event={"ID":"2032bfa9-398b-4802-84bc-272c70f31afb","Type":"ContainerStarted","Data":"a465b1f2ae23a52bbef43befc39e4d73fc8ab37caaa6611dbf2be5b05d6e936e"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.686794 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.690283 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" 
event={"ID":"8fc55d76-cb72-4ac9-b132-24b997e298a3","Type":"ContainerStarted","Data":"28fff4e1a08a2cf48fa6090db997505a096fd0d9353936d2667f3796c85cb1f3"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.691837 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.694680 4762 generic.go:334] "Generic (PLEG): container finished" podID="4e3e6f1d-92e9-411e-a724-03fea1fc802b" containerID="b22e9652e34ae45c89cdc44e52741809be38ad85dfcc3388c69148816796b18c" exitCode=0 Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.694827 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" event={"ID":"4e3e6f1d-92e9-411e-a724-03fea1fc802b","Type":"ContainerDied","Data":"b22e9652e34ae45c89cdc44e52741809be38ad85dfcc3388c69148816796b18c"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.703480 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b" event={"ID":"2352d4f2-aadc-4ad7-806e-9324d3be5116","Type":"ContainerStarted","Data":"cf9e5d139f2949d00b8ae6e62e5127f62cf9d67464dea27eda625e8bdbf08bd0"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.703813 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.708835 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" event={"ID":"7a27cd53-cc43-4227-a15a-d55e0bfaf81d","Type":"ContainerStarted","Data":"80d4b0c90e20c1540ac3be3983526a8fc767a6a1679f16761f6f89ae360dcd94"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.710462 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.711134 4762 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-88s4d container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.711167 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" podUID="7a27cd53-cc43-4227-a15a-d55e0bfaf81d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.711931 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" event={"ID":"8e8be3de-e055-441d-bfff-7b966b35dc15","Type":"ContainerStarted","Data":"6acbb6c1d5abd25b7f850486591c9aaf4f4063befc031bc371367574e508d8f0"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.713027 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.734659 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" event={"ID":"7a1f5442-2f22-4dff-b59a-0a8233a83b41","Type":"ContainerStarted","Data":"cd060f6e42646f524391ca8b9f74c66d863c328c43e352bb9d6ab1d7f4cdc71e"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.734978 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.744728 4762 
generic.go:334] "Generic (PLEG): container finished" podID="c872048a-5196-4f23-97e2-ce9e611c9ea0" containerID="e53d653fcf859e7c96d8424c7384d641efd4a7905317f7d8c03a2b89c95af947" exitCode=1 Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.744793 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5546f" event={"ID":"c872048a-5196-4f23-97e2-ce9e611c9ea0","Type":"ContainerDied","Data":"e53d653fcf859e7c96d8424c7384d641efd4a7905317f7d8c03a2b89c95af947"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.746542 4762 scope.go:117] "RemoveContainer" containerID="e53d653fcf859e7c96d8424c7384d641efd4a7905317f7d8c03a2b89c95af947" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.751722 4762 generic.go:334] "Generic (PLEG): container finished" podID="3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14" containerID="5a6d8fac6a6e64a0518c0702f5a28f4dae3953a5d0f5c4c20afa6d40f8071d06" exitCode=0 Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.751792 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14","Type":"ContainerDied","Data":"5a6d8fac6a6e64a0518c0702f5a28f4dae3953a5d0f5c4c20afa6d40f8071d06"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.755737 4762 generic.go:334] "Generic (PLEG): container finished" podID="04980224-fe82-485b-83f9-9c3d30b196db" containerID="f9363b8fc8ac34db5baa8b6349034b057f9b994a6c568d3a4bc373a0a2ec92a9" exitCode=0 Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.755798 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" event={"ID":"04980224-fe82-485b-83f9-9c3d30b196db","Type":"ContainerDied","Data":"f9363b8fc8ac34db5baa8b6349034b057f9b994a6c568d3a4bc373a0a2ec92a9"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.757373 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.765813 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="f50a5390-b172-470a-bcfd-161e360d90db" containerName="galera" containerID="cri-o://ab2d98bfe519b063c5ee46d89c20f223f11a485732e05ac73d9868a4128c5e19" gracePeriod=27 Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.769144 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-hwdww" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.770610 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.773215 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" event={"ID":"4de5942e-acf8-4138-acc3-42c177a7f997","Type":"ContainerStarted","Data":"8231bc9674ec3ce41a079ba94df324a70bf18c8e2a27beab0581d19284906030"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.774684 4762 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dbz7x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.774788 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" podUID="4de5942e-acf8-4138-acc3-42c177a7f997" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Mar 08 
02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.790931 4762 generic.go:334] "Generic (PLEG): container finished" podID="977085a1-8184-4c52-8e8d-6cb64635e335" containerID="c80d66e8e7e5668f7c00d99192f18daca4c57debfeb8a489761d0dd3d149108a" exitCode=0 Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.791001 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" event={"ID":"977085a1-8184-4c52-8e8d-6cb64635e335","Type":"ContainerDied","Data":"c80d66e8e7e5668f7c00d99192f18daca4c57debfeb8a489761d0dd3d149108a"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.797330 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" event={"ID":"6b30a18d-93d3-48de-9b32-7c2326e04220","Type":"ContainerStarted","Data":"221e449e2670b16e679291600a14c683b79ca63a026acae44879794b98336e68"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.798287 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.806498 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="0d82ab27-d2d8-486a-8514-2af542e4223a" containerName="galera" containerID="cri-o://64889e7ad2b231700461464a45111bfb6b179f3dc972c45ad8da27d64c23a1b5" gracePeriod=26 Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.807289 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" event={"ID":"1bc55675-0793-4489-b05d-03581df96527","Type":"ContainerStarted","Data":"2ee9e3bd64fca9d9b0b3e9d8a55d5b9f0bc677aee81ae1f3f41e88ef31c2a2d0"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.807512 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-67d996989d-m26xv" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.807550 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.811629 4762 patch_prober.go:28] interesting pod/console-operator-58897d9998-tw6wd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.811677 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" podUID="1d484943-583d-493a-ab04-bf99847ff4c4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.812306 4762 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vwrhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.812346 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" podUID="741e90e6-8de3-4054-94cf-7ada0da0e454" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.812360 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" 
event={"ID":"741e90e6-8de3-4054-94cf-7ada0da0e454","Type":"ContainerStarted","Data":"99c86cfe373a105a50d8c4848d9d226f0be0090c83a5080fb6fbe1e7a0b2c9f7"} Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.812432 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.849913 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-jvlps" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.865167 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54688575f-lk8mx" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.867913 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-xmkb6" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.935506 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-4qgst" Mar 08 02:02:34 crc kubenswrapper[4762]: I0308 02:02:34.983007 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-4qgst" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.227895 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6dff66bc49-x8f92" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.288574 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.364808 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 
02:02:35.464853 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.502933 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.843269 4762 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-jr6wh container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.25:8081/healthz\": dial tcp 10.217.0.25:8081: connect: connection refused" start-of-body= Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.845914 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" podUID="977085a1-8184-4c52-8e8d-6cb64635e335" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.25:8081/healthz\": dial tcp 10.217.0.25:8081: connect: connection refused" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.843194 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" event={"ID":"977085a1-8184-4c52-8e8d-6cb64635e335","Type":"ContainerStarted","Data":"486197a5a0250e60efd645f486697c22ae7e16740bd6736ee22636d01abc7384"} Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.845987 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.852038 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xtl98" event={"ID":"0707d234-c53e-4212-b289-65a10c0b1502","Type":"ContainerStarted","Data":"851aadff3b60f8015ab3e791d584083d2815d65a65a8e7d36a4204c5a061c772"} Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.860809 4762 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2" event={"ID":"5edc85d7-4f23-4c94-a998-17f8402c37d3","Type":"ContainerStarted","Data":"66922d0fcdcf4af6a2abdbde15763b399d10e89595bb0b1186a07a84971e1168"} Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.861279 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.868420 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/2.log" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.872512 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.872582 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e525c74c1899683d6ce26f8b957339d4360377e7194b9d4230854697e9aee8f3"} Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.880728 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz" event={"ID":"20b130fa-d7f7-441a-bd96-0d5858f1ece1","Type":"ContainerStarted","Data":"0b7927bd5981dd9d133a21f4fd48a81121263322448e72f15f2394fcd48b47d0"} Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.893238 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a544093b8efb706596ce511f04301c3ae26617b0d0a7858da38cf3faf19fb965"} Mar 08 
02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.893946 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.913026 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" event={"ID":"4e3e6f1d-92e9-411e-a724-03fea1fc802b","Type":"ContainerStarted","Data":"8eab1575751b54b57c78f747a8dc62798453a4cbb5a547c85681e0c135b855c6"} Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.913648 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.914134 4762 patch_prober.go:28] interesting pod/route-controller-manager-777f6d5845-rfx2s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.914178 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" podUID="4e3e6f1d-92e9-411e-a724-03fea1fc802b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.949286 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4j4bt" event={"ID":"3cafb56e-d1ea-48b5-9b1c-691e86cba0d9","Type":"ContainerStarted","Data":"9af09b3f035836ce9a88ec1e40a4043d262c6504029c8ae5819e43783522f7dd"} Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.951214 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-5546f" event={"ID":"c872048a-5196-4f23-97e2-ce9e611c9ea0","Type":"ContainerStarted","Data":"ec908de665c5924a5933bd2fd48583b1448f0433759cb7cb2d34eb24d91e9950"} Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.958521 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14","Type":"ContainerStarted","Data":"2740e371676ef0ffbe0eb121e987c144fe290685ed7123f2dd2f4f9a5774feab"} Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.962940 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6" event={"ID":"04980224-fe82-485b-83f9-9c3d30b196db","Type":"ContainerStarted","Data":"45ca7b533e8ccacc571778ae85e3aec33de5c073b5342a40086b79def7b7cf0d"} Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.963565 4762 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-88s4d container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.963624 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" podUID="7a27cd53-cc43-4227-a15a-d55e0bfaf81d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.964010 4762 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dbz7x container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" 
start-of-body= Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.964050 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" podUID="4de5942e-acf8-4138-acc3-42c177a7f997" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.964237 4762 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vwrhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.964259 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" podUID="741e90e6-8de3-4054-94cf-7ada0da0e454" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Mar 08 02:02:35 crc kubenswrapper[4762]: I0308 02:02:35.964443 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4j4bt" Mar 08 02:02:36 crc kubenswrapper[4762]: I0308 02:02:36.599391 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 02:02:36 crc kubenswrapper[4762]: I0308 02:02:36.601965 4762 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Mar 08 02:02:36 crc kubenswrapper[4762]: I0308 02:02:36.602033 4762 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Mar 08 02:02:36 crc kubenswrapper[4762]: I0308 02:02:36.617272 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="3fa8be70-ca35-4c49-867c-43a10b8f6f8e" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:36 crc kubenswrapper[4762]: I0308 02:02:36.618331 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="8eea7cf3-6a5e-4661-a544-a48ebc424a89" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:36 crc kubenswrapper[4762]: I0308 02:02:36.623739 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="af0c65d2-782a-49ee-a867-296757df295b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:36 crc kubenswrapper[4762]: I0308 02:02:36.728633 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 08 02:02:36 crc kubenswrapper[4762]: I0308 02:02:36.728683 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 08 02:02:36 crc kubenswrapper[4762]: I0308 02:02:36.728845 4762 
patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 08 02:02:36 crc kubenswrapper[4762]: I0308 02:02:36.728897 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 08 02:02:36 crc kubenswrapper[4762]: I0308 02:02:36.796043 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xtl98" Mar 08 02:02:36 crc kubenswrapper[4762]: I0308 02:02:36.796090 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xtl98" Mar 08 02:02:36 crc kubenswrapper[4762]: I0308 02:02:36.973260 4762 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-jr6wh container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.25:8081/healthz\": dial tcp 10.217.0.25:8081: connect: connection refused" start-of-body= Mar 08 02:02:36 crc kubenswrapper[4762]: I0308 02:02:36.973638 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" podUID="977085a1-8184-4c52-8e8d-6cb64635e335" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.25:8081/healthz\": dial tcp 10.217.0.25:8081: connect: connection refused" Mar 08 02:02:36 crc kubenswrapper[4762]: I0308 02:02:36.973286 4762 patch_prober.go:28] interesting pod/route-controller-manager-777f6d5845-rfx2s container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" start-of-body= Mar 08 02:02:36 crc kubenswrapper[4762]: I0308 02:02:36.973717 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" podUID="4e3e6f1d-92e9-411e-a724-03fea1fc802b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.67:8443/healthz\": dial tcp 10.217.0.67:8443: connect: connection refused" Mar 08 02:02:37 crc kubenswrapper[4762]: I0308 02:02:37.001451 4762 patch_prober.go:28] interesting pod/console-operator-58897d9998-tw6wd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Mar 08 02:02:37 crc kubenswrapper[4762]: I0308 02:02:37.001502 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" podUID="1d484943-583d-493a-ab04-bf99847ff4c4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": dial tcp 10.217.0.10:8443: connect: connection refused" Mar 08 02:02:37 crc kubenswrapper[4762]: I0308 02:02:37.001816 4762 patch_prober.go:28] interesting pod/console-operator-58897d9998-tw6wd container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Mar 08 02:02:37 crc kubenswrapper[4762]: I0308 02:02:37.001844 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" podUID="1d484943-583d-493a-ab04-bf99847ff4c4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial 
tcp 10.217.0.10:8443: connect: connection refused" Mar 08 02:02:37 crc kubenswrapper[4762]: I0308 02:02:37.192944 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7585f757fc-xgd5r" Mar 08 02:02:37 crc kubenswrapper[4762]: I0308 02:02:37.477397 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 08 02:02:37 crc kubenswrapper[4762]: I0308 02:02:37.477718 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 08 02:02:37 crc kubenswrapper[4762]: I0308 02:02:37.478721 4762 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-t95jr container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Mar 08 02:02:37 crc kubenswrapper[4762]: I0308 02:02:37.478862 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" podUID="a1b71198-134e-4cec-9f0b-b28979adf785" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Mar 08 02:02:37 crc kubenswrapper[4762]: I0308 02:02:37.542656 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-infra/auto-csr-approver-29548922-b9s6j"] Mar 08 02:02:37 crc kubenswrapper[4762]: I0308 02:02:37.727042 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Mar 08 02:02:37 crc kubenswrapper[4762]: [+]has-synced ok Mar 08 02:02:37 crc kubenswrapper[4762]: [-]process-running failed: reason withheld Mar 08 02:02:37 crc kubenswrapper[4762]: healthz check failed Mar 08 02:02:37 crc kubenswrapper[4762]: I0308 02:02:37.727125 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:37 crc kubenswrapper[4762]: W0308 02:02:37.755114 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode42fa3a8_143b_4850_89ce_f63ef728708a.slice/crio-3cf7fd3c5eacc409ffaf38e4422defec536bd69b066c67067aa8a142d9d7836b WatchSource:0}: Error finding container 3cf7fd3c5eacc409ffaf38e4422defec536bd69b066c67067aa8a142d9d7836b: Status 404 returned error can't find the container with id 3cf7fd3c5eacc409ffaf38e4422defec536bd69b066c67067aa8a142d9d7836b Mar 08 02:02:37 crc kubenswrapper[4762]: I0308 02:02:37.985645 4762 generic.go:334] "Generic (PLEG): container finished" podID="f50a5390-b172-470a-bcfd-161e360d90db" containerID="ab2d98bfe519b063c5ee46d89c20f223f11a485732e05ac73d9868a4128c5e19" exitCode=0 Mar 08 02:02:37 crc kubenswrapper[4762]: I0308 02:02:37.985748 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f50a5390-b172-470a-bcfd-161e360d90db","Type":"ContainerDied","Data":"ab2d98bfe519b063c5ee46d89c20f223f11a485732e05ac73d9868a4128c5e19"} Mar 08 02:02:37 crc 
kubenswrapper[4762]: I0308 02:02:37.986026 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f50a5390-b172-470a-bcfd-161e360d90db","Type":"ContainerStarted","Data":"bcfafaade34dfe296f65d350e76b052cb6cbab9ed497490330416dadeeb575a9"} Mar 08 02:02:37 crc kubenswrapper[4762]: I0308 02:02:37.987683 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548922-b9s6j" event={"ID":"e42fa3a8-143b-4850-89ce-f63ef728708a","Type":"ContainerStarted","Data":"3cf7fd3c5eacc409ffaf38e4422defec536bd69b066c67067aa8a142d9d7836b"} Mar 08 02:02:38 crc kubenswrapper[4762]: I0308 02:02:38.325174 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-2fjlb" Mar 08 02:02:38 crc kubenswrapper[4762]: I0308 02:02:38.372559 4762 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vwrhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Mar 08 02:02:38 crc kubenswrapper[4762]: I0308 02:02:38.372582 4762 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-vwrhn container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Mar 08 02:02:38 crc kubenswrapper[4762]: I0308 02:02:38.372616 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" podUID="741e90e6-8de3-4054-94cf-7ada0da0e454" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Mar 08 02:02:38 crc kubenswrapper[4762]: I0308 02:02:38.372651 4762 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" podUID="741e90e6-8de3-4054-94cf-7ada0da0e454" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Mar 08 02:02:38 crc kubenswrapper[4762]: I0308 02:02:38.403893 4762 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-88s4d container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Mar 08 02:02:38 crc kubenswrapper[4762]: I0308 02:02:38.403971 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" podUID="7a27cd53-cc43-4227-a15a-d55e0bfaf81d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Mar 08 02:02:38 crc kubenswrapper[4762]: I0308 02:02:38.403983 4762 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-88s4d container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body= Mar 08 02:02:38 crc kubenswrapper[4762]: I0308 02:02:38.404030 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" podUID="7a27cd53-cc43-4227-a15a-d55e0bfaf81d" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" Mar 08 02:02:38 crc kubenswrapper[4762]: I0308 02:02:38.520424 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dbz7x" Mar 08 02:02:39 crc kubenswrapper[4762]: I0308 02:02:39.102750 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="af0c65d2-782a-49ee-a867-296757df295b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:39 crc kubenswrapper[4762]: I0308 02:02:39.128349 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="8eea7cf3-6a5e-4661-a544-a48ebc424a89" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:39 crc kubenswrapper[4762]: I0308 02:02:39.244099 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack-operators/openstack-operator-index-xtl98" podUID="0707d234-c53e-4212-b289-65a10c0b1502" containerName="registry-server" probeResult="failure" output=< Mar 08 02:02:39 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 02:02:39 crc kubenswrapper[4762]: > Mar 08 02:02:39 crc kubenswrapper[4762]: I0308 02:02:39.258714 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="3fa8be70-ca35-4c49-867c-43a10b8f6f8e" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:39 crc kubenswrapper[4762]: E0308 02:02:39.471103 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64889e7ad2b231700461464a45111bfb6b179f3dc972c45ad8da27d64c23a1b5 is running failed: container process not found" containerID="64889e7ad2b231700461464a45111bfb6b179f3dc972c45ad8da27d64c23a1b5" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 08 02:02:39 crc kubenswrapper[4762]: E0308 02:02:39.471931 4762 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64889e7ad2b231700461464a45111bfb6b179f3dc972c45ad8da27d64c23a1b5 is running failed: container process not found" containerID="64889e7ad2b231700461464a45111bfb6b179f3dc972c45ad8da27d64c23a1b5" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 08 02:02:39 crc kubenswrapper[4762]: E0308 02:02:39.472233 4762 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64889e7ad2b231700461464a45111bfb6b179f3dc972c45ad8da27d64c23a1b5 is running failed: container process not found" containerID="64889e7ad2b231700461464a45111bfb6b179f3dc972c45ad8da27d64c23a1b5" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 08 02:02:39 crc kubenswrapper[4762]: E0308 02:02:39.472329 4762 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64889e7ad2b231700461464a45111bfb6b179f3dc972c45ad8da27d64c23a1b5 is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="0d82ab27-d2d8-486a-8514-2af542e4223a" containerName="galera" Mar 08 02:02:40 crc kubenswrapper[4762]: I0308 02:02:40.012254 4762 generic.go:334] "Generic (PLEG): container finished" podID="0d82ab27-d2d8-486a-8514-2af542e4223a" containerID="64889e7ad2b231700461464a45111bfb6b179f3dc972c45ad8da27d64c23a1b5" exitCode=0 Mar 08 02:02:40 crc kubenswrapper[4762]: I0308 02:02:40.012453 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0d82ab27-d2d8-486a-8514-2af542e4223a","Type":"ContainerDied","Data":"64889e7ad2b231700461464a45111bfb6b179f3dc972c45ad8da27d64c23a1b5"} Mar 08 02:02:40 crc kubenswrapper[4762]: I0308 02:02:40.012601 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"0d82ab27-d2d8-486a-8514-2af542e4223a","Type":"ContainerStarted","Data":"b72ed6c6eaa78e4a6d02ebc034383f03aa895d88f8400431ab9d73e5a4e83e9e"} Mar 08 02:02:40 crc kubenswrapper[4762]: I0308 02:02:40.014472 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548922-b9s6j" event={"ID":"e42fa3a8-143b-4850-89ce-f63ef728708a","Type":"ContainerStarted","Data":"0799d8760bdb5198f2dd96a304f593e68ab8d2845633b6946d58e3f0cadc802a"} Mar 08 02:02:40 crc kubenswrapper[4762]: I0308 02:02:40.021177 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-kn22k_47ea3169-322b-4246-9a87-515ba6b49133/router/0.log" Mar 08 02:02:40 crc kubenswrapper[4762]: I0308 02:02:40.021228 4762 generic.go:334] "Generic (PLEG): container finished" podID="47ea3169-322b-4246-9a87-515ba6b49133" containerID="20dc728d30786c61357663c07671e3894813f2935a0a4dc5797eb9ba02b16e98" exitCode=137 Mar 08 02:02:40 crc kubenswrapper[4762]: I0308 02:02:40.021254 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kn22k" event={"ID":"47ea3169-322b-4246-9a87-515ba6b49133","Type":"ContainerDied","Data":"20dc728d30786c61357663c07671e3894813f2935a0a4dc5797eb9ba02b16e98"} Mar 08 02:02:40 crc kubenswrapper[4762]: I0308 02:02:40.021278 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-kn22k" event={"ID":"47ea3169-322b-4246-9a87-515ba6b49133","Type":"ContainerStarted","Data":"52782d59b855e89a6d446b3abec134680a3affb79703151f77860c10de79383f"} Mar 08 02:02:40 crc kubenswrapper[4762]: I0308 02:02:40.079481 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548922-b9s6j" podStartSLOduration=39.182773366 podStartE2EDuration="40.078456701s" podCreationTimestamp="2026-03-08 02:02:00 +0000 UTC" firstStartedPulling="2026-03-08 02:02:37.755297739 +0000 UTC 
m=+5979.229442083" lastFinishedPulling="2026-03-08 02:02:38.650981074 +0000 UTC m=+5980.125125418" observedRunningTime="2026-03-08 02:02:40.069212951 +0000 UTC m=+5981.543357295" watchObservedRunningTime="2026-03-08 02:02:40.078456701 +0000 UTC m=+5981.552601045" Mar 08 02:02:40 crc kubenswrapper[4762]: I0308 02:02:40.507921 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-t95jr" Mar 08 02:02:40 crc kubenswrapper[4762]: I0308 02:02:40.716973 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 02:02:40 crc kubenswrapper[4762]: I0308 02:02:40.717306 4762 patch_prober.go:28] interesting pod/router-default-5444994796-kn22k container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 08 02:02:40 crc kubenswrapper[4762]: I0308 02:02:40.717337 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-kn22k" podUID="47ea3169-322b-4246-9a87-515ba6b49133" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 08 02:02:40 crc kubenswrapper[4762]: I0308 02:02:40.884676 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 08 02:02:40 crc kubenswrapper[4762]: I0308 02:02:40.886053 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 08 02:02:41 crc kubenswrapper[4762]: I0308 02:02:41.718701 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 02:02:42 crc kubenswrapper[4762]: I0308 02:02:42.039666 4762 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 02:02:42 crc kubenswrapper[4762]: I0308 02:02:42.051840 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-kn22k" Mar 08 02:02:42 crc kubenswrapper[4762]: I0308 02:02:42.102625 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="af0c65d2-782a-49ee-a867-296757df295b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:42 crc kubenswrapper[4762]: I0308 02:02:42.102698 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 08 02:02:42 crc kubenswrapper[4762]: I0308 02:02:42.103564 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"053476c361f5549217982b407d66f0468dd9c13452fc9794567dafd203edb59e"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Mar 08 02:02:42 crc kubenswrapper[4762]: I0308 02:02:42.103617 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="af0c65d2-782a-49ee-a867-296757df295b" containerName="cinder-scheduler" containerID="cri-o://053476c361f5549217982b407d66f0468dd9c13452fc9794567dafd203edb59e" gracePeriod=30 Mar 08 02:02:42 crc kubenswrapper[4762]: I0308 02:02:42.125093 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="8eea7cf3-6a5e-4661-a544-a48ebc424a89" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:42 crc kubenswrapper[4762]: I0308 02:02:42.125172 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openstack/cinder-volume-volume1-0" Mar 08 02:02:42 crc kubenswrapper[4762]: I0308 02:02:42.135247 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-volume" containerStatusID={"Type":"cri-o","ID":"7312bba8475d2ba2ab481435d20d32e50330bf62a9f67ceee691272d2f4272b6"} pod="openstack/cinder-volume-volume1-0" containerMessage="Container cinder-volume failed liveness probe, will be restarted" Mar 08 02:02:42 crc kubenswrapper[4762]: I0308 02:02:42.135315 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-volume-volume1-0" podUID="8eea7cf3-6a5e-4661-a544-a48ebc424a89" containerName="cinder-volume" containerID="cri-o://7312bba8475d2ba2ab481435d20d32e50330bf62a9f67ceee691272d2f4272b6" gracePeriod=30 Mar 08 02:02:42 crc kubenswrapper[4762]: I0308 02:02:42.264023 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="3fa8be70-ca35-4c49-867c-43a10b8f6f8e" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:42 crc kubenswrapper[4762]: I0308 02:02:42.264089 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-backup-0" Mar 08 02:02:42 crc kubenswrapper[4762]: I0308 02:02:42.286675 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-backup" containerStatusID={"Type":"cri-o","ID":"5561f321e82514a4c6213e172f71c7c0e12d4481b063a5e8c26c2db81ddee599"} pod="openstack/cinder-backup-0" containerMessage="Container cinder-backup failed liveness probe, will be restarted" Mar 08 02:02:42 crc kubenswrapper[4762]: I0308 02:02:42.286737 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-backup-0" podUID="3fa8be70-ca35-4c49-867c-43a10b8f6f8e" containerName="cinder-backup" containerID="cri-o://5561f321e82514a4c6213e172f71c7c0e12d4481b063a5e8c26c2db81ddee599" gracePeriod=30 Mar 08 02:02:42 
crc kubenswrapper[4762]: I0308 02:02:42.498167 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-jr6wh" Mar 08 02:02:42 crc kubenswrapper[4762]: I0308 02:02:42.590541 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5b4fc57fb8-bgr67" Mar 08 02:02:42 crc kubenswrapper[4762]: I0308 02:02:42.807955 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-777f6d5845-rfx2s" Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.057884 4762 generic.go:334] "Generic (PLEG): container finished" podID="e42fa3a8-143b-4850-89ce-f63ef728708a" containerID="0799d8760bdb5198f2dd96a304f593e68ab8d2845633b6946d58e3f0cadc802a" exitCode=0 Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.058278 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548922-b9s6j" event={"ID":"e42fa3a8-143b-4850-89ce-f63ef728708a","Type":"ContainerDied","Data":"0799d8760bdb5198f2dd96a304f593e68ab8d2845633b6946d58e3f0cadc802a"} Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.252255 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dtfb5"] Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.262340 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dtfb5" Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.350939 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-utilities\") pod \"community-operators-dtfb5\" (UID: \"d451fb70-dfd7-414f-89f8-d9cd3953f3bb\") " pod="openshift-marketplace/community-operators-dtfb5" Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.351044 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-catalog-content\") pod \"community-operators-dtfb5\" (UID: \"d451fb70-dfd7-414f-89f8-d9cd3953f3bb\") " pod="openshift-marketplace/community-operators-dtfb5" Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.351631 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p92p\" (UniqueName: \"kubernetes.io/projected/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-kube-api-access-2p92p\") pod \"community-operators-dtfb5\" (UID: \"d451fb70-dfd7-414f-89f8-d9cd3953f3bb\") " pod="openshift-marketplace/community-operators-dtfb5" Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.374436 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dtfb5"] Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.453407 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p92p\" (UniqueName: \"kubernetes.io/projected/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-kube-api-access-2p92p\") pod \"community-operators-dtfb5\" (UID: \"d451fb70-dfd7-414f-89f8-d9cd3953f3bb\") " pod="openshift-marketplace/community-operators-dtfb5" Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.453510 4762 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-utilities\") pod \"community-operators-dtfb5\" (UID: \"d451fb70-dfd7-414f-89f8-d9cd3953f3bb\") " pod="openshift-marketplace/community-operators-dtfb5" Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.453538 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-catalog-content\") pod \"community-operators-dtfb5\" (UID: \"d451fb70-dfd7-414f-89f8-d9cd3953f3bb\") " pod="openshift-marketplace/community-operators-dtfb5" Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.454044 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-catalog-content\") pod \"community-operators-dtfb5\" (UID: \"d451fb70-dfd7-414f-89f8-d9cd3953f3bb\") " pod="openshift-marketplace/community-operators-dtfb5" Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.454095 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-utilities\") pod \"community-operators-dtfb5\" (UID: \"d451fb70-dfd7-414f-89f8-d9cd3953f3bb\") " pod="openshift-marketplace/community-operators-dtfb5" Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.809703 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p92p\" (UniqueName: \"kubernetes.io/projected/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-kube-api-access-2p92p\") pod \"community-operators-dtfb5\" (UID: \"d451fb70-dfd7-414f-89f8-d9cd3953f3bb\") " pod="openshift-marketplace/community-operators-dtfb5" Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.835921 4762 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/manila-scheduler-0" podUID="65897654-e519-4a6a-9557-2344198bc5cd" containerName="manila-scheduler" probeResult="failure" output="Get \"http://10.217.1.117:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.895702 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dtfb5" Mar 08 02:02:43 crc kubenswrapper[4762]: I0308 02:02:43.987188 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-55b56f86c9-fm7md" Mar 08 02:02:44 crc kubenswrapper[4762]: I0308 02:02:44.016423 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6f659bb4d7-nxfzd" Mar 08 02:02:44 crc kubenswrapper[4762]: I0308 02:02:44.313670 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2nfcz" Mar 08 02:02:44 crc kubenswrapper[4762]: I0308 02:02:44.456210 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-vmb9b" Mar 08 02:02:44 crc kubenswrapper[4762]: I0308 02:02:44.707817 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-pf8l2" Mar 08 02:02:44 crc kubenswrapper[4762]: I0308 02:02:44.895800 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-847b8" Mar 08 02:02:45 crc kubenswrapper[4762]: I0308 02:02:45.092575 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-k88bh" Mar 08 02:02:45 crc kubenswrapper[4762]: I0308 
02:02:45.097595 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-ptbxt" Mar 08 02:02:45 crc kubenswrapper[4762]: I0308 02:02:45.098482 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548922-b9s6j" event={"ID":"e42fa3a8-143b-4850-89ce-f63ef728708a","Type":"ContainerDied","Data":"3cf7fd3c5eacc409ffaf38e4422defec536bd69b066c67067aa8a142d9d7836b"} Mar 08 02:02:45 crc kubenswrapper[4762]: I0308 02:02:45.098528 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cf7fd3c5eacc409ffaf38e4422defec536bd69b066c67067aa8a142d9d7836b" Mar 08 02:02:45 crc kubenswrapper[4762]: I0308 02:02:45.101080 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-4qgst" Mar 08 02:02:45 crc kubenswrapper[4762]: I0308 02:02:45.170921 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548922-b9s6j" Mar 08 02:02:45 crc kubenswrapper[4762]: W0308 02:02:45.196427 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd451fb70_dfd7_414f_89f8_d9cd3953f3bb.slice/crio-6357dd56ba99957ea457e6481926718680ef31eabafbeb222ce008f61a02e3b1 WatchSource:0}: Error finding container 6357dd56ba99957ea457e6481926718680ef31eabafbeb222ce008f61a02e3b1: Status 404 returned error can't find the container with id 6357dd56ba99957ea457e6481926718680ef31eabafbeb222ce008f61a02e3b1 Mar 08 02:02:45 crc kubenswrapper[4762]: I0308 02:02:45.211955 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dtfb5"] Mar 08 02:02:45 crc kubenswrapper[4762]: I0308 02:02:45.299000 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 02:02:45 crc kubenswrapper[4762]: I0308 02:02:45.311333 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sczc\" (UniqueName: \"kubernetes.io/projected/e42fa3a8-143b-4850-89ce-f63ef728708a-kube-api-access-7sczc\") pod \"e42fa3a8-143b-4850-89ce-f63ef728708a\" (UID: \"e42fa3a8-143b-4850-89ce-f63ef728708a\") " Mar 08 02:02:45 crc kubenswrapper[4762]: I0308 02:02:45.319132 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e42fa3a8-143b-4850-89ce-f63ef728708a-kube-api-access-7sczc" (OuterVolumeSpecName: "kube-api-access-7sczc") pod "e42fa3a8-143b-4850-89ce-f63ef728708a" (UID: "e42fa3a8-143b-4850-89ce-f63ef728708a"). InnerVolumeSpecName "kube-api-access-7sczc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:02:45 crc kubenswrapper[4762]: I0308 02:02:45.338596 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-zfk9l" Mar 08 02:02:45 crc kubenswrapper[4762]: I0308 02:02:45.416058 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sczc\" (UniqueName: \"kubernetes.io/projected/e42fa3a8-143b-4850-89ce-f63ef728708a-kube-api-access-7sczc\") on node \"crc\" DevicePath \"\"" Mar 08 02:02:45 crc kubenswrapper[4762]: I0308 02:02:45.440099 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-5r7nk" Mar 08 02:02:45 crc kubenswrapper[4762]: I0308 02:02:45.534864 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-xrnnz" Mar 08 02:02:46 crc kubenswrapper[4762]: I0308 02:02:46.117054 4762 generic.go:334] "Generic (PLEG): container finished" podID="af0c65d2-782a-49ee-a867-296757df295b" containerID="053476c361f5549217982b407d66f0468dd9c13452fc9794567dafd203edb59e" exitCode=0 Mar 08 02:02:46 crc kubenswrapper[4762]: I0308 02:02:46.117174 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"af0c65d2-782a-49ee-a867-296757df295b","Type":"ContainerDied","Data":"053476c361f5549217982b407d66f0468dd9c13452fc9794567dafd203edb59e"} Mar 08 02:02:46 crc kubenswrapper[4762]: I0308 02:02:46.119362 4762 generic.go:334] "Generic (PLEG): container finished" podID="d451fb70-dfd7-414f-89f8-d9cd3953f3bb" containerID="6af9b8cd79201e49b211d7f2abe4642f12c7b8ed9d1e5992b0477520917286e3" exitCode=0 Mar 08 02:02:46 crc kubenswrapper[4762]: I0308 02:02:46.119453 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548922-b9s6j" Mar 08 02:02:46 crc kubenswrapper[4762]: I0308 02:02:46.127515 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtfb5" event={"ID":"d451fb70-dfd7-414f-89f8-d9cd3953f3bb","Type":"ContainerDied","Data":"6af9b8cd79201e49b211d7f2abe4642f12c7b8ed9d1e5992b0477520917286e3"} Mar 08 02:02:46 crc kubenswrapper[4762]: I0308 02:02:46.127549 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtfb5" event={"ID":"d451fb70-dfd7-414f-89f8-d9cd3953f3bb","Type":"ContainerStarted","Data":"6357dd56ba99957ea457e6481926718680ef31eabafbeb222ce008f61a02e3b1"} Mar 08 02:02:46 crc kubenswrapper[4762]: I0308 02:02:46.608109 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 02:02:46 crc kubenswrapper[4762]: I0308 02:02:46.615140 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 08 02:02:46 crc kubenswrapper[4762]: I0308 02:02:46.728676 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 08 02:02:46 crc kubenswrapper[4762]: I0308 02:02:46.728714 4762 patch_prober.go:28] interesting pod/downloads-7954f5f757-84dbj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Mar 08 02:02:46 crc kubenswrapper[4762]: I0308 02:02:46.728724 4762 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-84dbj" 
podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 08 02:02:46 crc kubenswrapper[4762]: I0308 02:02:46.728779 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-84dbj" podUID="6e8e8070-7d3f-4a58-b1ce-6f240bb0170d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Mar 08 02:02:46 crc kubenswrapper[4762]: I0308 02:02:46.761687 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2" Mar 08 02:02:46 crc kubenswrapper[4762]: I0308 02:02:46.854455 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-xtl98" Mar 08 02:02:46 crc kubenswrapper[4762]: I0308 02:02:46.906033 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-xtl98" Mar 08 02:02:47 crc kubenswrapper[4762]: I0308 02:02:47.006023 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-tw6wd" Mar 08 02:02:47 crc kubenswrapper[4762]: I0308 02:02:47.131512 4762 generic.go:334] "Generic (PLEG): container finished" podID="8eea7cf3-6a5e-4661-a544-a48ebc424a89" containerID="7312bba8475d2ba2ab481435d20d32e50330bf62a9f67ceee691272d2f4272b6" exitCode=0 Mar 08 02:02:47 crc kubenswrapper[4762]: I0308 02:02:47.131569 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"8eea7cf3-6a5e-4661-a544-a48ebc424a89","Type":"ContainerDied","Data":"7312bba8475d2ba2ab481435d20d32e50330bf62a9f67ceee691272d2f4272b6"} Mar 08 02:02:47 crc kubenswrapper[4762]: I0308 
02:02:47.133557 4762 generic.go:334] "Generic (PLEG): container finished" podID="3fa8be70-ca35-4c49-867c-43a10b8f6f8e" containerID="5561f321e82514a4c6213e172f71c7c0e12d4481b063a5e8c26c2db81ddee599" exitCode=0 Mar 08 02:02:47 crc kubenswrapper[4762]: I0308 02:02:47.134424 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3fa8be70-ca35-4c49-867c-43a10b8f6f8e","Type":"ContainerDied","Data":"5561f321e82514a4c6213e172f71c7c0e12d4481b063a5e8c26c2db81ddee599"} Mar 08 02:02:47 crc kubenswrapper[4762]: I0308 02:02:47.134451 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"3fa8be70-ca35-4c49-867c-43a10b8f6f8e","Type":"ContainerStarted","Data":"aeaef0ee401cd63d00bd3eb8ec3a53bd2bc9993188eb3863536d1cd1b117fe3a"} Mar 08 02:02:47 crc kubenswrapper[4762]: I0308 02:02:47.234729 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Mar 08 02:02:48 crc kubenswrapper[4762]: I0308 02:02:48.157878 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"8eea7cf3-6a5e-4661-a544-a48ebc424a89","Type":"ContainerStarted","Data":"759df713a2fa76744f7273bee1f71848e2e1bfc5643f01b94b4a428bfde3b635"} Mar 08 02:02:48 crc kubenswrapper[4762]: I0308 02:02:48.164825 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtfb5" event={"ID":"d451fb70-dfd7-414f-89f8-d9cd3953f3bb","Type":"ContainerStarted","Data":"995617d9ddcdbb08958d445662ff923de3d6ae9e6dabac0d9e9769e98ae06018"} Mar 08 02:02:48 crc kubenswrapper[4762]: I0308 02:02:48.382395 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-vwrhn" Mar 08 02:02:48 crc kubenswrapper[4762]: I0308 02:02:48.411882 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-88s4d" Mar 08 02:02:48 crc kubenswrapper[4762]: I0308 02:02:48.670581 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4j4bt" Mar 08 02:02:49 crc kubenswrapper[4762]: I0308 02:02:49.175508 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"af0c65d2-782a-49ee-a867-296757df295b","Type":"ContainerStarted","Data":"529cbbc1bfb721fcc238237bc1a11fe2a8b9cc664b5c9218e5fbd67d96c5f1bb"} Mar 08 02:02:49 crc kubenswrapper[4762]: I0308 02:02:49.177150 4762 generic.go:334] "Generic (PLEG): container finished" podID="b14c85df-f56a-4a30-bf25-0f41cd88b32d" containerID="6e746020429a15995843491f803dbdedbba8eb5686eeddfc5a22aec3fbd37be1" exitCode=1 Mar 08 02:02:49 crc kubenswrapper[4762]: I0308 02:02:49.177230 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b14c85df-f56a-4a30-bf25-0f41cd88b32d","Type":"ContainerDied","Data":"6e746020429a15995843491f803dbdedbba8eb5686eeddfc5a22aec3fbd37be1"} Mar 08 02:02:49 crc kubenswrapper[4762]: I0308 02:02:49.470373 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 08 02:02:49 crc kubenswrapper[4762]: I0308 02:02:49.470695 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.084979 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.729012 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.877572 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-ssh-key\") pod \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.879285 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-openstack-config-secret\") pod \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.879941 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b14c85df-f56a-4a30-bf25-0f41cd88b32d-test-operator-ephemeral-workdir\") pod \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.880742 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b14c85df-f56a-4a30-bf25-0f41cd88b32d-test-operator-ephemeral-temporary\") pod \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.880942 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-ca-certs\") pod \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.881114 4762 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.881593 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47vpx\" (UniqueName: \"kubernetes.io/projected/b14c85df-f56a-4a30-bf25-0f41cd88b32d-kube-api-access-47vpx\") pod \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.881710 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b14c85df-f56a-4a30-bf25-0f41cd88b32d-openstack-config\") pod \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.881984 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b14c85df-f56a-4a30-bf25-0f41cd88b32d-config-data\") pod \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\" (UID: \"b14c85df-f56a-4a30-bf25-0f41cd88b32d\") " Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.882442 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b14c85df-f56a-4a30-bf25-0f41cd88b32d-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "b14c85df-f56a-4a30-bf25-0f41cd88b32d" (UID: "b14c85df-f56a-4a30-bf25-0f41cd88b32d"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.884974 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b14c85df-f56a-4a30-bf25-0f41cd88b32d-config-data" (OuterVolumeSpecName: "config-data") pod "b14c85df-f56a-4a30-bf25-0f41cd88b32d" (UID: "b14c85df-f56a-4a30-bf25-0f41cd88b32d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.885169 4762 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b14c85df-f56a-4a30-bf25-0f41cd88b32d-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.887965 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "b14c85df-f56a-4a30-bf25-0f41cd88b32d" (UID: "b14c85df-f56a-4a30-bf25-0f41cd88b32d"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.889302 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b14c85df-f56a-4a30-bf25-0f41cd88b32d-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "b14c85df-f56a-4a30-bf25-0f41cd88b32d" (UID: "b14c85df-f56a-4a30-bf25-0f41cd88b32d"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.898274 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b14c85df-f56a-4a30-bf25-0f41cd88b32d-kube-api-access-47vpx" (OuterVolumeSpecName: "kube-api-access-47vpx") pod "b14c85df-f56a-4a30-bf25-0f41cd88b32d" (UID: "b14c85df-f56a-4a30-bf25-0f41cd88b32d"). InnerVolumeSpecName "kube-api-access-47vpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.987558 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47vpx\" (UniqueName: \"kubernetes.io/projected/b14c85df-f56a-4a30-bf25-0f41cd88b32d-kube-api-access-47vpx\") on node \"crc\" DevicePath \"\"" Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.987898 4762 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b14c85df-f56a-4a30-bf25-0f41cd88b32d-config-data\") on node \"crc\" DevicePath \"\"" Mar 08 02:02:50 crc kubenswrapper[4762]: I0308 02:02:50.988011 4762 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b14c85df-f56a-4a30-bf25-0f41cd88b32d-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Mar 08 02:02:51 crc kubenswrapper[4762]: I0308 02:02:51.014541 4762 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Mar 08 02:02:51 crc kubenswrapper[4762]: I0308 02:02:51.198525 4762 generic.go:334] "Generic (PLEG): container finished" podID="d451fb70-dfd7-414f-89f8-d9cd3953f3bb" containerID="995617d9ddcdbb08958d445662ff923de3d6ae9e6dabac0d9e9769e98ae06018" exitCode=0 Mar 08 02:02:51 crc kubenswrapper[4762]: I0308 02:02:51.198625 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-dtfb5" event={"ID":"d451fb70-dfd7-414f-89f8-d9cd3953f3bb","Type":"ContainerDied","Data":"995617d9ddcdbb08958d445662ff923de3d6ae9e6dabac0d9e9769e98ae06018"} Mar 08 02:02:51 crc kubenswrapper[4762]: I0308 02:02:51.201415 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Mar 08 02:02:51 crc kubenswrapper[4762]: I0308 02:02:51.201581 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b14c85df-f56a-4a30-bf25-0f41cd88b32d","Type":"ContainerDied","Data":"b30da89c9562692c811db048d05efc17f8a3a418990f14e084d07f2276656924"} Mar 08 02:02:51 crc kubenswrapper[4762]: I0308 02:02:51.201607 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b30da89c9562692c811db048d05efc17f8a3a418990f14e084d07f2276656924" Mar 08 02:02:51 crc kubenswrapper[4762]: I0308 02:02:51.423392 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b14c85df-f56a-4a30-bf25-0f41cd88b32d" (UID: "b14c85df-f56a-4a30-bf25-0f41cd88b32d"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 02:02:51 crc kubenswrapper[4762]: I0308 02:02:51.423919 4762 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 08 02:02:51 crc kubenswrapper[4762]: I0308 02:02:51.436328 4762 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Mar 08 02:02:51 crc kubenswrapper[4762]: I0308 02:02:51.484641 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "b14c85df-f56a-4a30-bf25-0f41cd88b32d" (UID: "b14c85df-f56a-4a30-bf25-0f41cd88b32d"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 02:02:51 crc kubenswrapper[4762]: I0308 02:02:51.505749 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b14c85df-f56a-4a30-bf25-0f41cd88b32d" (UID: "b14c85df-f56a-4a30-bf25-0f41cd88b32d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 02:02:51 crc kubenswrapper[4762]: I0308 02:02:51.507117 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b14c85df-f56a-4a30-bf25-0f41cd88b32d-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b14c85df-f56a-4a30-bf25-0f41cd88b32d" (UID: "b14c85df-f56a-4a30-bf25-0f41cd88b32d"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 02:02:51 crc kubenswrapper[4762]: I0308 02:02:51.525816 4762 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-ssh-key\") on node \"crc\" DevicePath \"\"" Mar 08 02:02:51 crc kubenswrapper[4762]: I0308 02:02:51.525857 4762 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b14c85df-f56a-4a30-bf25-0f41cd88b32d-ca-certs\") on node \"crc\" DevicePath \"\"" Mar 08 02:02:51 crc kubenswrapper[4762]: I0308 02:02:51.525868 4762 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Mar 08 02:02:51 crc kubenswrapper[4762]: I0308 02:02:51.525878 4762 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b14c85df-f56a-4a30-bf25-0f41cd88b32d-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 08 02:02:52 crc kubenswrapper[4762]: I0308 02:02:52.102699 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Mar 08 02:02:52 crc kubenswrapper[4762]: I0308 02:02:52.128596 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="8eea7cf3-6a5e-4661-a544-a48ebc424a89" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:52 crc kubenswrapper[4762]: I0308 02:02:52.217320 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtfb5" event={"ID":"d451fb70-dfd7-414f-89f8-d9cd3953f3bb","Type":"ContainerStarted","Data":"e8a0500192965dbf1a63e82432f5ca73c387e8135bcd35d0d94d9942ae1c5fc4"} Mar 08 02:02:52 crc kubenswrapper[4762]: I0308 02:02:52.222875 4762 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-console/console-859d87bf79-sbgvn" podUID="9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8" containerName="console" containerID="cri-o://cc56921b87d3b48c9754bb0e5a8c075000c33d418a4763fd63d81afdbf0f1207" gracePeriod=13 Mar 08 02:02:52 crc kubenswrapper[4762]: I0308 02:02:52.241899 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dtfb5" podStartSLOduration=3.6537731940000002 podStartE2EDuration="9.241868733s" podCreationTimestamp="2026-03-08 02:02:43 +0000 UTC" firstStartedPulling="2026-03-08 02:02:46.128957242 +0000 UTC m=+5987.603101586" lastFinishedPulling="2026-03-08 02:02:51.717052781 +0000 UTC m=+5993.191197125" observedRunningTime="2026-03-08 02:02:52.239287792 +0000 UTC m=+5993.713432136" watchObservedRunningTime="2026-03-08 02:02:52.241868733 +0000 UTC m=+5993.716013117" Mar 08 02:02:52 crc kubenswrapper[4762]: I0308 02:02:52.279299 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="3fa8be70-ca35-4c49-867c-43a10b8f6f8e" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:53 crc kubenswrapper[4762]: I0308 02:02:53.230862 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-859d87bf79-sbgvn_9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8/console/0.log" Mar 08 02:02:53 crc kubenswrapper[4762]: I0308 02:02:53.231242 4762 generic.go:334] "Generic (PLEG): container finished" podID="9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8" containerID="cc56921b87d3b48c9754bb0e5a8c075000c33d418a4763fd63d81afdbf0f1207" exitCode=2 Mar 08 02:02:53 crc kubenswrapper[4762]: I0308 02:02:53.231276 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-859d87bf79-sbgvn" event={"ID":"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8","Type":"ContainerDied","Data":"cc56921b87d3b48c9754bb0e5a8c075000c33d418a4763fd63d81afdbf0f1207"} Mar 08 02:02:53 crc 
kubenswrapper[4762]: I0308 02:02:53.896420 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dtfb5" Mar 08 02:02:53 crc kubenswrapper[4762]: I0308 02:02:53.896643 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dtfb5" Mar 08 02:02:54 crc kubenswrapper[4762]: I0308 02:02:54.029957 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-share-share1-0" podUID="2a5c5599-66a7-46b2-8f10-2bfe3905d5fd" containerName="manila-share" probeResult="failure" output="Get \"http://10.217.1.118:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 08 02:02:54 crc kubenswrapper[4762]: I0308 02:02:54.243413 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-859d87bf79-sbgvn_9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8/console/0.log" Mar 08 02:02:54 crc kubenswrapper[4762]: I0308 02:02:54.243458 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-859d87bf79-sbgvn" event={"ID":"9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8","Type":"ContainerStarted","Data":"14f81f32415df029c1f29ddf3594abd0bb078437232f9df8d711e867bb1b8f24"} Mar 08 02:02:54 crc kubenswrapper[4762]: I0308 02:02:54.334523 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 02:02:54 crc kubenswrapper[4762]: I0308 02:02:54.334870 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-859d87bf79-sbgvn" Mar 08 02:02:54 crc kubenswrapper[4762]: I0308 02:02:54.335308 4762 patch_prober.go:28] interesting pod/console-859d87bf79-sbgvn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.130:8443/health\": dial tcp 10.217.0.130:8443: connect: connection refused" start-of-body= Mar 08 02:02:54 crc 
kubenswrapper[4762]: I0308 02:02:54.335463 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-859d87bf79-sbgvn" podUID="9c1046ef-c0b1-4c7d-9d36-da9f6652f3c8" containerName="console" probeResult="failure" output="Get \"https://10.217.0.130:8443/health\": dial tcp 10.217.0.130:8443: connect: connection refused" Mar 08 02:02:54 crc kubenswrapper[4762]: I0308 02:02:54.943976 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dtfb5" podUID="d451fb70-dfd7-414f-89f8-d9cd3953f3bb" containerName="registry-server" probeResult="failure" output=< Mar 08 02:02:54 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 02:02:54 crc kubenswrapper[4762]: > Mar 08 02:02:55 crc kubenswrapper[4762]: I0308 02:02:55.110844 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="af0c65d2-782a-49ee-a867-296757df295b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 08 02:02:56 crc kubenswrapper[4762]: I0308 02:02:56.388818 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 08 02:02:56 crc kubenswrapper[4762]: E0308 02:02:56.389543 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e42fa3a8-143b-4850-89ce-f63ef728708a" containerName="oc" Mar 08 02:02:56 crc kubenswrapper[4762]: I0308 02:02:56.389578 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="e42fa3a8-143b-4850-89ce-f63ef728708a" containerName="oc" Mar 08 02:02:56 crc kubenswrapper[4762]: E0308 02:02:56.389607 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14c85df-f56a-4a30-bf25-0f41cd88b32d" containerName="tempest-tests-tempest-tests-runner" Mar 08 02:02:56 crc kubenswrapper[4762]: I0308 02:02:56.389614 4762 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b14c85df-f56a-4a30-bf25-0f41cd88b32d" containerName="tempest-tests-tempest-tests-runner" Mar 08 02:02:56 crc kubenswrapper[4762]: I0308 02:02:56.389858 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="b14c85df-f56a-4a30-bf25-0f41cd88b32d" containerName="tempest-tests-tempest-tests-runner" Mar 08 02:02:56 crc kubenswrapper[4762]: I0308 02:02:56.389878 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="e42fa3a8-143b-4850-89ce-f63ef728708a" containerName="oc" Mar 08 02:02:56 crc kubenswrapper[4762]: I0308 02:02:56.390721 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 08 02:02:56 crc kubenswrapper[4762]: I0308 02:02:56.397707 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hw7xm" Mar 08 02:02:56 crc kubenswrapper[4762]: I0308 02:02:56.405883 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Mar 08 02:02:56 crc kubenswrapper[4762]: I0308 02:02:56.537436 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9z2q\" (UniqueName: \"kubernetes.io/projected/899bc607-bcbf-4b42-85e6-7635ff538c92-kube-api-access-x9z2q\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"899bc607-bcbf-4b42-85e6-7635ff538c92\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 08 02:02:56 crc kubenswrapper[4762]: I0308 02:02:56.538509 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"899bc607-bcbf-4b42-85e6-7635ff538c92\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 08 02:02:56 crc 
kubenswrapper[4762]: I0308 02:02:56.640485 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"899bc607-bcbf-4b42-85e6-7635ff538c92\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 08 02:02:56 crc kubenswrapper[4762]: I0308 02:02:56.641531 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9z2q\" (UniqueName: \"kubernetes.io/projected/899bc607-bcbf-4b42-85e6-7635ff538c92-kube-api-access-x9z2q\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"899bc607-bcbf-4b42-85e6-7635ff538c92\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 08 02:02:56 crc kubenswrapper[4762]: I0308 02:02:56.641114 4762 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"899bc607-bcbf-4b42-85e6-7635ff538c92\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 08 02:02:56 crc kubenswrapper[4762]: I0308 02:02:56.669666 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9z2q\" (UniqueName: \"kubernetes.io/projected/899bc607-bcbf-4b42-85e6-7635ff538c92-kube-api-access-x9z2q\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"899bc607-bcbf-4b42-85e6-7635ff538c92\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Mar 08 02:02:56 crc kubenswrapper[4762]: I0308 02:02:56.673923 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"899bc607-bcbf-4b42-85e6-7635ff538c92\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Mar 08 02:02:56 crc kubenswrapper[4762]: I0308 02:02:56.730614 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Mar 08 02:02:56 crc kubenswrapper[4762]: I0308 02:02:56.755621 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-84dbj"
Mar 08 02:02:57 crc kubenswrapper[4762]: I0308 02:02:57.153794 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="8eea7cf3-6a5e-4661-a544-a48ebc424a89" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 02:02:57 crc kubenswrapper[4762]: I0308 02:02:57.268376 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="3fa8be70-ca35-4c49-867c-43a10b8f6f8e" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 02:02:57 crc kubenswrapper[4762]: I0308 02:02:57.305524 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Mar 08 02:02:57 crc kubenswrapper[4762]: W0308 02:02:57.312879 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod899bc607_bcbf_4b42_85e6_7635ff538c92.slice/crio-a2f2c500cce560bd85efb4ad5dafe1b9a21450b53e90f72905b20e60f8569e1d WatchSource:0}: Error finding container a2f2c500cce560bd85efb4ad5dafe1b9a21450b53e90f72905b20e60f8569e1d: Status 404 returned error can't find the container with id a2f2c500cce560bd85efb4ad5dafe1b9a21450b53e90f72905b20e60f8569e1d
Mar 08 02:02:58 crc kubenswrapper[4762]: I0308 02:02:58.306313 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"899bc607-bcbf-4b42-85e6-7635ff538c92","Type":"ContainerStarted","Data":"a2f2c500cce560bd85efb4ad5dafe1b9a21450b53e90f72905b20e60f8569e1d"}
Mar 08 02:02:58 crc kubenswrapper[4762]: I0308 02:02:58.872433 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" podUID="2e89b5ad-4281-471d-a5c5-55a2351a9cab" containerName="oauth-openshift" containerID="cri-o://77eaabcbc693bcc171b95666032866ca1103633ba6fb9bcb20b9c9efe19a804a" gracePeriod=15
Mar 08 02:02:59 crc kubenswrapper[4762]: I0308 02:02:59.380258 4762 generic.go:334] "Generic (PLEG): container finished" podID="2e89b5ad-4281-471d-a5c5-55a2351a9cab" containerID="77eaabcbc693bcc171b95666032866ca1103633ba6fb9bcb20b9c9efe19a804a" exitCode=0
Mar 08 02:02:59 crc kubenswrapper[4762]: I0308 02:02:59.380345 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" event={"ID":"2e89b5ad-4281-471d-a5c5-55a2351a9cab","Type":"ContainerDied","Data":"77eaabcbc693bcc171b95666032866ca1103633ba6fb9bcb20b9c9efe19a804a"}
Mar 08 02:02:59 crc kubenswrapper[4762]: I0308 02:02:59.383297 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"899bc607-bcbf-4b42-85e6-7635ff538c92","Type":"ContainerStarted","Data":"d3c16a1ccfc3d507a489a5c58187ed65bb6c1b7e36f96c3f21d38c5727c3602f"}
Mar 08 02:02:59 crc kubenswrapper[4762]: I0308 02:02:59.405547 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.174346425 podStartE2EDuration="3.405530714s" podCreationTimestamp="2026-03-08 02:02:56 +0000 UTC" firstStartedPulling="2026-03-08 02:02:57.317219841 +0000 UTC m=+5998.791364185" lastFinishedPulling="2026-03-08 02:02:58.54840413 +0000 UTC m=+6000.022548474" observedRunningTime="2026-03-08 02:02:59.403023894 +0000 UTC m=+6000.877168238" watchObservedRunningTime="2026-03-08 02:02:59.405530714 +0000 UTC m=+6000.879675058"
Mar 08 02:03:00 crc kubenswrapper[4762]: I0308 02:03:00.105863 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="af0c65d2-782a-49ee-a867-296757df295b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 02:03:00 crc kubenswrapper[4762]: I0308 02:03:00.393226 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq" event={"ID":"2e89b5ad-4281-471d-a5c5-55a2351a9cab","Type":"ContainerStarted","Data":"4a03eab4cd4f82108f350713468368c4cf556f8ed12919d58845febe4d2b408d"}
Mar 08 02:03:01 crc kubenswrapper[4762]: I0308 02:03:01.401857 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq"
Mar 08 02:03:01 crc kubenswrapper[4762]: I0308 02:03:01.425139 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-9c9dfc54c-9qcbq"
Mar 08 02:03:02 crc kubenswrapper[4762]: I0308 02:03:02.123794 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="8eea7cf3-6a5e-4661-a544-a48ebc424a89" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 02:03:02 crc kubenswrapper[4762]: I0308 02:03:02.265794 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="3fa8be70-ca35-4c49-867c-43a10b8f6f8e" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 02:03:02 crc kubenswrapper[4762]: E0308 02:03:02.286193 4762 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.196:58758->38.102.83.196:38853: write tcp 38.102.83.196:58758->38.102.83.196:38853: write: broken pipe
Mar 08 02:03:03 crc kubenswrapper[4762]: I0308 02:03:03.768507 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-58b8966548-4d5g2"
Mar 08 02:03:03 crc kubenswrapper[4762]: I0308 02:03:03.877973 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-scheduler-0" podUID="65897654-e519-4a6a-9557-2344198bc5cd" containerName="manila-scheduler" probeResult="failure" output="Get \"http://10.217.1.117:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:03:03 crc kubenswrapper[4762]: I0308 02:03:03.878066 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-scheduler-0"
Mar 08 02:03:03 crc kubenswrapper[4762]: I0308 02:03:03.879081 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-scheduler" containerStatusID={"Type":"cri-o","ID":"1aeb1800d448d5480416b45b618a5e0dfd29767df74dc7e20f9b1a45bc95cae4"} pod="openstack/manila-scheduler-0" containerMessage="Container manila-scheduler failed liveness probe, will be restarted"
Mar 08 02:03:03 crc kubenswrapper[4762]: I0308 02:03:03.879170 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="65897654-e519-4a6a-9557-2344198bc5cd" containerName="manila-scheduler" containerID="cri-o://1aeb1800d448d5480416b45b618a5e0dfd29767df74dc7e20f9b1a45bc95cae4" gracePeriod=30
Mar 08 02:03:04 crc kubenswrapper[4762]: I0308 02:03:04.498387 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-859d87bf79-sbgvn"
Mar 08 02:03:04 crc kubenswrapper[4762]: I0308 02:03:04.502237 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-859d87bf79-sbgvn"
Mar 08 02:03:04 crc kubenswrapper[4762]: I0308 02:03:04.949923 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-dtfb5" podUID="d451fb70-dfd7-414f-89f8-d9cd3953f3bb" containerName="registry-server" probeResult="failure" output=<
Mar 08 02:03:04 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s
Mar 08 02:03:04 crc kubenswrapper[4762]: >
Mar 08 02:03:05 crc kubenswrapper[4762]: I0308 02:03:05.140608 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="af0c65d2-782a-49ee-a867-296757df295b" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 08 02:03:05 crc kubenswrapper[4762]: I0308 02:03:05.491676 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Mar 08 02:03:05 crc kubenswrapper[4762]: I0308 02:03:05.635070 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Mar 08 02:03:05 crc kubenswrapper[4762]: I0308 02:03:05.983461 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548916-qv2ld"]
Mar 08 02:03:05 crc kubenswrapper[4762]: I0308 02:03:05.999527 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548916-qv2ld"]
Mar 08 02:03:06 crc kubenswrapper[4762]: I0308 02:03:06.002381 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Mar 08 02:03:06 crc kubenswrapper[4762]: I0308 02:03:06.111964 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Mar 08 02:03:07 crc kubenswrapper[4762]: I0308 02:03:07.149028 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0"
Mar 08 02:03:07 crc kubenswrapper[4762]: I0308 02:03:07.299983 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e81de951-6da9-4f87-81e2-21dbcd5eec1c" path="/var/lib/kubelet/pods/e81de951-6da9-4f87-81e2-21dbcd5eec1c/volumes"
Mar 08 02:03:07 crc kubenswrapper[4762]: I0308 02:03:07.303010 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0"
Mar 08 02:03:08 crc kubenswrapper[4762]: I0308 02:03:08.276818 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-nh2q6"
Mar 08 02:03:10 crc kubenswrapper[4762]: I0308 02:03:10.108245 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Mar 08 02:03:14 crc kubenswrapper[4762]: I0308 02:03:14.073083 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-share-share1-0" podUID="2a5c5599-66a7-46b2-8f10-2bfe3905d5fd" containerName="manila-share" probeResult="failure" output="Get \"http://10.217.1.118:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 08 02:03:14 crc kubenswrapper[4762]: I0308 02:03:14.073699 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-share-share1-0"
Mar 08 02:03:14 crc kubenswrapper[4762]: I0308 02:03:14.075160 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-share" containerStatusID={"Type":"cri-o","ID":"4cc47b87fb37c59fc52c90dc77591a2a6cb35632fa2dcf9f76135fc409b3c5e3"} pod="openstack/manila-share-share1-0" containerMessage="Container manila-share failed liveness probe, will be restarted"
Mar 08 02:03:14 crc kubenswrapper[4762]: I0308 02:03:14.075227 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="2a5c5599-66a7-46b2-8f10-2bfe3905d5fd" containerName="manila-share" containerID="cri-o://4cc47b87fb37c59fc52c90dc77591a2a6cb35632fa2dcf9f76135fc409b3c5e3" gracePeriod=30
Mar 08 02:03:14 crc kubenswrapper[4762]: I0308 02:03:14.551878 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dtfb5"
Mar 08 02:03:14 crc kubenswrapper[4762]: I0308 02:03:14.649371 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dtfb5"
Mar 08 02:03:14 crc kubenswrapper[4762]: I0308 02:03:14.788696 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dtfb5"]
Mar 08 02:03:15 crc kubenswrapper[4762]: I0308 02:03:15.560257 4762 generic.go:334] "Generic (PLEG): container finished" podID="2a5c5599-66a7-46b2-8f10-2bfe3905d5fd" containerID="4cc47b87fb37c59fc52c90dc77591a2a6cb35632fa2dcf9f76135fc409b3c5e3" exitCode=1
Mar 08 02:03:15 crc kubenswrapper[4762]: I0308 02:03:15.560343 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd","Type":"ContainerDied","Data":"4cc47b87fb37c59fc52c90dc77591a2a6cb35632fa2dcf9f76135fc409b3c5e3"}
Mar 08 02:03:16 crc kubenswrapper[4762]: I0308 02:03:16.576752 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"2a5c5599-66a7-46b2-8f10-2bfe3905d5fd","Type":"ContainerStarted","Data":"da9892bdc1b7d5a9732865a9e395cf63ed68f399a6a51c96959fcc0e08854c99"}
Mar 08 02:03:16 crc kubenswrapper[4762]: I0308 02:03:16.577690 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dtfb5" podUID="d451fb70-dfd7-414f-89f8-d9cd3953f3bb" containerName="registry-server" containerID="cri-o://e8a0500192965dbf1a63e82432f5ca73c387e8135bcd35d0d94d9942ae1c5fc4" gracePeriod=2
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.551388 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dtfb5"
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.616204 4762 generic.go:334] "Generic (PLEG): container finished" podID="d451fb70-dfd7-414f-89f8-d9cd3953f3bb" containerID="e8a0500192965dbf1a63e82432f5ca73c387e8135bcd35d0d94d9942ae1c5fc4" exitCode=0
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.617453 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dtfb5"
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.618066 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtfb5" event={"ID":"d451fb70-dfd7-414f-89f8-d9cd3953f3bb","Type":"ContainerDied","Data":"e8a0500192965dbf1a63e82432f5ca73c387e8135bcd35d0d94d9942ae1c5fc4"}
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.618102 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dtfb5" event={"ID":"d451fb70-dfd7-414f-89f8-d9cd3953f3bb","Type":"ContainerDied","Data":"6357dd56ba99957ea457e6481926718680ef31eabafbeb222ce008f61a02e3b1"}
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.619844 4762 scope.go:117] "RemoveContainer" containerID="e8a0500192965dbf1a63e82432f5ca73c387e8135bcd35d0d94d9942ae1c5fc4"
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.661645 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p92p\" (UniqueName: \"kubernetes.io/projected/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-kube-api-access-2p92p\") pod \"d451fb70-dfd7-414f-89f8-d9cd3953f3bb\" (UID: \"d451fb70-dfd7-414f-89f8-d9cd3953f3bb\") "
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.661950 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-utilities\") pod \"d451fb70-dfd7-414f-89f8-d9cd3953f3bb\" (UID: \"d451fb70-dfd7-414f-89f8-d9cd3953f3bb\") "
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.662051 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-catalog-content\") pod \"d451fb70-dfd7-414f-89f8-d9cd3953f3bb\" (UID: \"d451fb70-dfd7-414f-89f8-d9cd3953f3bb\") "
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.663447 4762 scope.go:117] "RemoveContainer" containerID="995617d9ddcdbb08958d445662ff923de3d6ae9e6dabac0d9e9769e98ae06018"
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.666669 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-utilities" (OuterVolumeSpecName: "utilities") pod "d451fb70-dfd7-414f-89f8-d9cd3953f3bb" (UID: "d451fb70-dfd7-414f-89f8-d9cd3953f3bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.698047 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-kube-api-access-2p92p" (OuterVolumeSpecName: "kube-api-access-2p92p") pod "d451fb70-dfd7-414f-89f8-d9cd3953f3bb" (UID: "d451fb70-dfd7-414f-89f8-d9cd3953f3bb"). InnerVolumeSpecName "kube-api-access-2p92p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.726917 4762 scope.go:117] "RemoveContainer" containerID="6af9b8cd79201e49b211d7f2abe4642f12c7b8ed9d1e5992b0477520917286e3"
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.756151 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d451fb70-dfd7-414f-89f8-d9cd3953f3bb" (UID: "d451fb70-dfd7-414f-89f8-d9cd3953f3bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.765238 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-utilities\") on node \"crc\" DevicePath \"\""
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.765269 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.765283 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p92p\" (UniqueName: \"kubernetes.io/projected/d451fb70-dfd7-414f-89f8-d9cd3953f3bb-kube-api-access-2p92p\") on node \"crc\" DevicePath \"\""
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.796437 4762 scope.go:117] "RemoveContainer" containerID="e8a0500192965dbf1a63e82432f5ca73c387e8135bcd35d0d94d9942ae1c5fc4"
Mar 08 02:03:17 crc kubenswrapper[4762]: E0308 02:03:17.798329 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8a0500192965dbf1a63e82432f5ca73c387e8135bcd35d0d94d9942ae1c5fc4\": container with ID starting with e8a0500192965dbf1a63e82432f5ca73c387e8135bcd35d0d94d9942ae1c5fc4 not found: ID does not exist" containerID="e8a0500192965dbf1a63e82432f5ca73c387e8135bcd35d0d94d9942ae1c5fc4"
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.798665 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8a0500192965dbf1a63e82432f5ca73c387e8135bcd35d0d94d9942ae1c5fc4"} err="failed to get container status \"e8a0500192965dbf1a63e82432f5ca73c387e8135bcd35d0d94d9942ae1c5fc4\": rpc error: code = NotFound desc = could not find container \"e8a0500192965dbf1a63e82432f5ca73c387e8135bcd35d0d94d9942ae1c5fc4\": container with ID starting with e8a0500192965dbf1a63e82432f5ca73c387e8135bcd35d0d94d9942ae1c5fc4 not found: ID does not exist"
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.798698 4762 scope.go:117] "RemoveContainer" containerID="995617d9ddcdbb08958d445662ff923de3d6ae9e6dabac0d9e9769e98ae06018"
Mar 08 02:03:17 crc kubenswrapper[4762]: E0308 02:03:17.799275 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"995617d9ddcdbb08958d445662ff923de3d6ae9e6dabac0d9e9769e98ae06018\": container with ID starting with 995617d9ddcdbb08958d445662ff923de3d6ae9e6dabac0d9e9769e98ae06018 not found: ID does not exist" containerID="995617d9ddcdbb08958d445662ff923de3d6ae9e6dabac0d9e9769e98ae06018"
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.799298 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995617d9ddcdbb08958d445662ff923de3d6ae9e6dabac0d9e9769e98ae06018"} err="failed to get container status \"995617d9ddcdbb08958d445662ff923de3d6ae9e6dabac0d9e9769e98ae06018\": rpc error: code = NotFound desc = could not find container \"995617d9ddcdbb08958d445662ff923de3d6ae9e6dabac0d9e9769e98ae06018\": container with ID starting with 995617d9ddcdbb08958d445662ff923de3d6ae9e6dabac0d9e9769e98ae06018 not found: ID does not exist"
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.799311 4762 scope.go:117] "RemoveContainer" containerID="6af9b8cd79201e49b211d7f2abe4642f12c7b8ed9d1e5992b0477520917286e3"
Mar 08 02:03:17 crc kubenswrapper[4762]: E0308 02:03:17.799543 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6af9b8cd79201e49b211d7f2abe4642f12c7b8ed9d1e5992b0477520917286e3\": container with ID starting with 6af9b8cd79201e49b211d7f2abe4642f12c7b8ed9d1e5992b0477520917286e3 not found: ID does not exist" containerID="6af9b8cd79201e49b211d7f2abe4642f12c7b8ed9d1e5992b0477520917286e3"
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.799568 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af9b8cd79201e49b211d7f2abe4642f12c7b8ed9d1e5992b0477520917286e3"} err="failed to get container status \"6af9b8cd79201e49b211d7f2abe4642f12c7b8ed9d1e5992b0477520917286e3\": rpc error: code = NotFound desc = could not find container \"6af9b8cd79201e49b211d7f2abe4642f12c7b8ed9d1e5992b0477520917286e3\": container with ID starting with 6af9b8cd79201e49b211d7f2abe4642f12c7b8ed9d1e5992b0477520917286e3 not found: ID does not exist"
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.954439 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dtfb5"]
Mar 08 02:03:17 crc kubenswrapper[4762]: I0308 02:03:17.963951 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dtfb5"]
Mar 08 02:03:19 crc kubenswrapper[4762]: I0308 02:03:19.276537 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d451fb70-dfd7-414f-89f8-d9cd3953f3bb" path="/var/lib/kubelet/pods/d451fb70-dfd7-414f-89f8-d9cd3953f3bb/volumes"
Mar 08 02:03:21 crc kubenswrapper[4762]: I0308 02:03:21.390359 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 08 02:03:23 crc kubenswrapper[4762]: I0308 02:03:23.946232 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0"
Mar 08 02:03:34 crc kubenswrapper[4762]: I0308 02:03:34.832292 4762 generic.go:334] "Generic (PLEG): container finished" podID="65897654-e519-4a6a-9557-2344198bc5cd" containerID="1aeb1800d448d5480416b45b618a5e0dfd29767df74dc7e20f9b1a45bc95cae4" exitCode=137
Mar 08 02:03:34 crc kubenswrapper[4762]: I0308 02:03:34.832418 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"65897654-e519-4a6a-9557-2344198bc5cd","Type":"ContainerDied","Data":"1aeb1800d448d5480416b45b618a5e0dfd29767df74dc7e20f9b1a45bc95cae4"}
Mar 08 02:03:34 crc kubenswrapper[4762]: I0308 02:03:34.833017 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"65897654-e519-4a6a-9557-2344198bc5cd","Type":"ContainerStarted","Data":"cf0459b07824029722d92a40a145af0d9586d32d53477ecf497f796cf9b47846"}
Mar 08 02:03:35 crc kubenswrapper[4762]: I0308 02:03:35.445238 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0"
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.497678 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cqww4/must-gather-c5sdj"]
Mar 08 02:03:37 crc kubenswrapper[4762]: E0308 02:03:37.503326 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d451fb70-dfd7-414f-89f8-d9cd3953f3bb" containerName="extract-content"
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.503363 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d451fb70-dfd7-414f-89f8-d9cd3953f3bb" containerName="extract-content"
Mar 08 02:03:37 crc kubenswrapper[4762]: E0308 02:03:37.503389 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d451fb70-dfd7-414f-89f8-d9cd3953f3bb" containerName="extract-utilities"
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.503398 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d451fb70-dfd7-414f-89f8-d9cd3953f3bb" containerName="extract-utilities"
Mar 08 02:03:37 crc kubenswrapper[4762]: E0308 02:03:37.503443 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d451fb70-dfd7-414f-89f8-d9cd3953f3bb" containerName="registry-server"
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.503450 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="d451fb70-dfd7-414f-89f8-d9cd3953f3bb" containerName="registry-server"
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.504372 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="d451fb70-dfd7-414f-89f8-d9cd3953f3bb" containerName="registry-server"
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.509422 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cqww4/must-gather-c5sdj"
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.519140 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cqww4"/"openshift-service-ca.crt"
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.519150 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-cqww4"/"default-dockercfg-dnx6d"
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.519147 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cqww4"/"kube-root-ca.crt"
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.526630 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cqww4/must-gather-c5sdj"]
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.643821 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj7nc\" (UniqueName: \"kubernetes.io/projected/0d412f05-a4d8-4b97-be7d-7f78eecd17e9-kube-api-access-gj7nc\") pod \"must-gather-c5sdj\" (UID: \"0d412f05-a4d8-4b97-be7d-7f78eecd17e9\") " pod="openshift-must-gather-cqww4/must-gather-c5sdj"
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.644100 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d412f05-a4d8-4b97-be7d-7f78eecd17e9-must-gather-output\") pod \"must-gather-c5sdj\" (UID: \"0d412f05-a4d8-4b97-be7d-7f78eecd17e9\") " pod="openshift-must-gather-cqww4/must-gather-c5sdj"
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.746479 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj7nc\" (UniqueName: \"kubernetes.io/projected/0d412f05-a4d8-4b97-be7d-7f78eecd17e9-kube-api-access-gj7nc\") pod \"must-gather-c5sdj\" (UID: \"0d412f05-a4d8-4b97-be7d-7f78eecd17e9\") " pod="openshift-must-gather-cqww4/must-gather-c5sdj"
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.746563 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d412f05-a4d8-4b97-be7d-7f78eecd17e9-must-gather-output\") pod \"must-gather-c5sdj\" (UID: \"0d412f05-a4d8-4b97-be7d-7f78eecd17e9\") " pod="openshift-must-gather-cqww4/must-gather-c5sdj"
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.748099 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d412f05-a4d8-4b97-be7d-7f78eecd17e9-must-gather-output\") pod \"must-gather-c5sdj\" (UID: \"0d412f05-a4d8-4b97-be7d-7f78eecd17e9\") " pod="openshift-must-gather-cqww4/must-gather-c5sdj"
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.770868 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj7nc\" (UniqueName: \"kubernetes.io/projected/0d412f05-a4d8-4b97-be7d-7f78eecd17e9-kube-api-access-gj7nc\") pod \"must-gather-c5sdj\" (UID: \"0d412f05-a4d8-4b97-be7d-7f78eecd17e9\") " pod="openshift-must-gather-cqww4/must-gather-c5sdj"
Mar 08 02:03:37 crc kubenswrapper[4762]: I0308 02:03:37.831131 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cqww4/must-gather-c5sdj"
Mar 08 02:03:38 crc kubenswrapper[4762]: I0308 02:03:38.691031 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cqww4/must-gather-c5sdj"]
Mar 08 02:03:38 crc kubenswrapper[4762]: I0308 02:03:38.710922 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 08 02:03:38 crc kubenswrapper[4762]: I0308 02:03:38.888115 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cqww4/must-gather-c5sdj" event={"ID":"0d412f05-a4d8-4b97-be7d-7f78eecd17e9","Type":"ContainerStarted","Data":"638881671ff684cd49442f0a1616e1f7cef897a9fe800c76e45d3d02eedd315b"}
Mar 08 02:03:42 crc kubenswrapper[4762]: I0308 02:03:42.851971 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 08 02:03:42 crc kubenswrapper[4762]: I0308 02:03:42.852496 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 08 02:03:43 crc kubenswrapper[4762]: I0308 02:03:43.728664 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0"
Mar 08 02:03:46 crc kubenswrapper[4762]: I0308 02:03:46.991430 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cqww4/must-gather-c5sdj" event={"ID":"0d412f05-a4d8-4b97-be7d-7f78eecd17e9","Type":"ContainerStarted","Data":"b79d218670a2967db4fff2c0d122c3c809fff030a7a8c0b45c30bc7d12be3c19"}
Mar 08 02:03:46 crc kubenswrapper[4762]: I0308 02:03:46.992117 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cqww4/must-gather-c5sdj" event={"ID":"0d412f05-a4d8-4b97-be7d-7f78eecd17e9","Type":"ContainerStarted","Data":"cd62354c41a32e6861ef8bf46fecd29ab29b7de8ce71b4a1da49188b39209428"}
Mar 08 02:03:47 crc kubenswrapper[4762]: I0308 02:03:47.021088 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cqww4/must-gather-c5sdj" podStartSLOduration=2.785165512 podStartE2EDuration="10.0193887s" podCreationTimestamp="2026-03-08 02:03:37 +0000 UTC" firstStartedPulling="2026-03-08 02:03:38.704365689 +0000 UTC m=+6040.178510033" lastFinishedPulling="2026-03-08 02:03:45.938588867 +0000 UTC m=+6047.412733221" observedRunningTime="2026-03-08 02:03:47.011976517 +0000 UTC m=+6048.486120871" watchObservedRunningTime="2026-03-08 02:03:47.0193887 +0000 UTC m=+6048.493533074"
Mar 08 02:03:47 crc kubenswrapper[4762]: I0308 02:03:47.915043 4762 scope.go:117] "RemoveContainer" containerID="06fc938cd18314a4fce4a56b097af49118d1527026bfeb87e840f99dcec32377"
Mar 08 02:03:52 crc kubenswrapper[4762]: I0308 02:03:52.351195 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cqww4/crc-debug-v5hs5"]
Mar 08 02:03:52 crc kubenswrapper[4762]: I0308 02:03:52.353126 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cqww4/crc-debug-v5hs5"
Mar 08 02:03:52 crc kubenswrapper[4762]: I0308 02:03:52.439862 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvzrw\" (UniqueName: \"kubernetes.io/projected/1f489e05-0731-46ad-a888-e6746ee00ab9-kube-api-access-pvzrw\") pod \"crc-debug-v5hs5\" (UID: \"1f489e05-0731-46ad-a888-e6746ee00ab9\") " pod="openshift-must-gather-cqww4/crc-debug-v5hs5"
Mar 08 02:03:52 crc kubenswrapper[4762]: I0308 02:03:52.439948 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f489e05-0731-46ad-a888-e6746ee00ab9-host\") pod \"crc-debug-v5hs5\" (UID: \"1f489e05-0731-46ad-a888-e6746ee00ab9\") " pod="openshift-must-gather-cqww4/crc-debug-v5hs5"
Mar 08 02:03:52 crc kubenswrapper[4762]: I0308 02:03:52.542320 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvzrw\" (UniqueName: \"kubernetes.io/projected/1f489e05-0731-46ad-a888-e6746ee00ab9-kube-api-access-pvzrw\") pod \"crc-debug-v5hs5\" (UID: \"1f489e05-0731-46ad-a888-e6746ee00ab9\") " pod="openshift-must-gather-cqww4/crc-debug-v5hs5"
Mar 08 02:03:52 crc kubenswrapper[4762]: I0308 02:03:52.542419 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f489e05-0731-46ad-a888-e6746ee00ab9-host\") pod \"crc-debug-v5hs5\" (UID: \"1f489e05-0731-46ad-a888-e6746ee00ab9\") " pod="openshift-must-gather-cqww4/crc-debug-v5hs5"
Mar 08 02:03:52 crc kubenswrapper[4762]: I0308 02:03:52.543408 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f489e05-0731-46ad-a888-e6746ee00ab9-host\") pod \"crc-debug-v5hs5\" (UID: \"1f489e05-0731-46ad-a888-e6746ee00ab9\") " pod="openshift-must-gather-cqww4/crc-debug-v5hs5"
Mar 08 02:03:52 crc kubenswrapper[4762]: I0308 02:03:52.560845 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvzrw\" (UniqueName: \"kubernetes.io/projected/1f489e05-0731-46ad-a888-e6746ee00ab9-kube-api-access-pvzrw\") pod \"crc-debug-v5hs5\" (UID: \"1f489e05-0731-46ad-a888-e6746ee00ab9\") " pod="openshift-must-gather-cqww4/crc-debug-v5hs5"
Mar 08 02:03:52 crc kubenswrapper[4762]: I0308 02:03:52.670623 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cqww4/crc-debug-v5hs5"
Mar 08 02:03:52 crc kubenswrapper[4762]: W0308 02:03:52.713001 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f489e05_0731_46ad_a888_e6746ee00ab9.slice/crio-b12804a540c9994db0046d9e294d7a4b0d7a327e1808ff2daedd4770a5f6bff0 WatchSource:0}: Error finding container b12804a540c9994db0046d9e294d7a4b0d7a327e1808ff2daedd4770a5f6bff0: Status 404 returned error can't find the container with id b12804a540c9994db0046d9e294d7a4b0d7a327e1808ff2daedd4770a5f6bff0
Mar 08 02:03:53 crc kubenswrapper[4762]: I0308 02:03:53.072569 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cqww4/crc-debug-v5hs5" event={"ID":"1f489e05-0731-46ad-a888-e6746ee00ab9","Type":"ContainerStarted","Data":"b12804a540c9994db0046d9e294d7a4b0d7a327e1808ff2daedd4770a5f6bff0"}
Mar 08 02:03:55 crc kubenswrapper[4762]: I0308 02:03:55.725171 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0"
Mar 08 02:04:00 crc kubenswrapper[4762]: I0308 02:04:00.188945 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548924-k89xp"]
Mar 08 02:04:00 crc kubenswrapper[4762]: I0308 02:04:00.191795 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548924-k89xp"
Mar 08 02:04:00 crc kubenswrapper[4762]: I0308 02:04:00.194591 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk"
Mar 08 02:04:00 crc kubenswrapper[4762]: I0308 02:04:00.194945 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 08 02:04:00 crc kubenswrapper[4762]: I0308 02:04:00.195282 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 08 02:04:00 crc kubenswrapper[4762]: I0308 02:04:00.202119 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548924-k89xp"]
Mar 08 02:04:00 crc kubenswrapper[4762]: I0308 02:04:00.331064 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl58z\" (UniqueName: \"kubernetes.io/projected/ecaf8110-509c-496a-8ef3-49998975a267-kube-api-access-rl58z\") pod \"auto-csr-approver-29548924-k89xp\" (UID: \"ecaf8110-509c-496a-8ef3-49998975a267\") " pod="openshift-infra/auto-csr-approver-29548924-k89xp"
Mar 08 02:04:00 crc kubenswrapper[4762]: I0308 02:04:00.433359 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl58z\" (UniqueName: \"kubernetes.io/projected/ecaf8110-509c-496a-8ef3-49998975a267-kube-api-access-rl58z\") pod \"auto-csr-approver-29548924-k89xp\" (UID: \"ecaf8110-509c-496a-8ef3-49998975a267\") " pod="openshift-infra/auto-csr-approver-29548924-k89xp"
Mar 08 02:04:00 crc kubenswrapper[4762]: I0308 02:04:00.475494 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl58z\" (UniqueName: \"kubernetes.io/projected/ecaf8110-509c-496a-8ef3-49998975a267-kube-api-access-rl58z\") pod \"auto-csr-approver-29548924-k89xp\" (UID: \"ecaf8110-509c-496a-8ef3-49998975a267\") " pod="openshift-infra/auto-csr-approver-29548924-k89xp"
Mar 08 02:04:00 crc kubenswrapper[4762]: I0308 02:04:00.511554 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548924-k89xp"
Mar 08 02:04:06 crc kubenswrapper[4762]: I0308 02:04:06.208216 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cqww4/crc-debug-v5hs5" event={"ID":"1f489e05-0731-46ad-a888-e6746ee00ab9","Type":"ContainerStarted","Data":"f5cd515fe924bdcc712f332feeadaddff9b78abb198e0a58c4e4a20b8c8cf9b6"}
Mar 08 02:04:06 crc kubenswrapper[4762]: I0308 02:04:06.452725 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cqww4/crc-debug-v5hs5" podStartSLOduration=1.898907061 podStartE2EDuration="14.452706612s" podCreationTimestamp="2026-03-08 02:03:52 +0000 UTC" firstStartedPulling="2026-03-08 02:03:52.715679389 +0000 UTC m=+6054.189823733" lastFinishedPulling="2026-03-08 02:04:05.26947893 +0000 UTC m=+6066.743623284" observedRunningTime="2026-03-08 02:04:06.253077399 +0000 UTC m=+6067.727221743" watchObservedRunningTime="2026-03-08 02:04:06.452706612 +0000 UTC m=+6067.926850956"
Mar 08 02:04:06 crc kubenswrapper[4762]: I0308 02:04:06.462510 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548924-k89xp"]
Mar 08 02:04:06 crc kubenswrapper[4762]: W0308 02:04:06.464580 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podecaf8110_509c_496a_8ef3_49998975a267.slice/crio-99158ca165cb22489eafd728b751f34cfb65d005e5074cc2074c3779713ce6a0 WatchSource:0}: Error finding container 99158ca165cb22489eafd728b751f34cfb65d005e5074cc2074c3779713ce6a0: Status 404 returned error can't find the container with id 99158ca165cb22489eafd728b751f34cfb65d005e5074cc2074c3779713ce6a0
Mar 08 02:04:07 crc kubenswrapper[4762]: I0308 02:04:07.224153 4762 kubelet.go:2453]
"SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548924-k89xp" event={"ID":"ecaf8110-509c-496a-8ef3-49998975a267","Type":"ContainerStarted","Data":"99158ca165cb22489eafd728b751f34cfb65d005e5074cc2074c3779713ce6a0"} Mar 08 02:04:09 crc kubenswrapper[4762]: I0308 02:04:09.250664 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548924-k89xp" event={"ID":"ecaf8110-509c-496a-8ef3-49998975a267","Type":"ContainerStarted","Data":"e388e5be2fd56fb7c37e7094c138b2e5bd3f120b09e7ff2b39aa53d8e437b1ad"} Mar 08 02:04:09 crc kubenswrapper[4762]: I0308 02:04:09.280656 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548924-k89xp" podStartSLOduration=8.468188817 podStartE2EDuration="9.280632437s" podCreationTimestamp="2026-03-08 02:04:00 +0000 UTC" firstStartedPulling="2026-03-08 02:04:06.466701292 +0000 UTC m=+6067.940845626" lastFinishedPulling="2026-03-08 02:04:07.279144892 +0000 UTC m=+6068.753289246" observedRunningTime="2026-03-08 02:04:09.263689184 +0000 UTC m=+6070.737833548" watchObservedRunningTime="2026-03-08 02:04:09.280632437 +0000 UTC m=+6070.754776791" Mar 08 02:04:11 crc kubenswrapper[4762]: I0308 02:04:11.276669 4762 generic.go:334] "Generic (PLEG): container finished" podID="ecaf8110-509c-496a-8ef3-49998975a267" containerID="e388e5be2fd56fb7c37e7094c138b2e5bd3f120b09e7ff2b39aa53d8e437b1ad" exitCode=0 Mar 08 02:04:11 crc kubenswrapper[4762]: I0308 02:04:11.278573 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548924-k89xp" event={"ID":"ecaf8110-509c-496a-8ef3-49998975a267","Type":"ContainerDied","Data":"e388e5be2fd56fb7c37e7094c138b2e5bd3f120b09e7ff2b39aa53d8e437b1ad"} Mar 08 02:04:12 crc kubenswrapper[4762]: I0308 02:04:12.851141 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 02:04:12 crc kubenswrapper[4762]: I0308 02:04:12.851657 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 02:04:14 crc kubenswrapper[4762]: I0308 02:04:14.277593 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548924-k89xp" Mar 08 02:04:14 crc kubenswrapper[4762]: I0308 02:04:14.308155 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548924-k89xp" event={"ID":"ecaf8110-509c-496a-8ef3-49998975a267","Type":"ContainerDied","Data":"99158ca165cb22489eafd728b751f34cfb65d005e5074cc2074c3779713ce6a0"} Mar 08 02:04:14 crc kubenswrapper[4762]: I0308 02:04:14.308513 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99158ca165cb22489eafd728b751f34cfb65d005e5074cc2074c3779713ce6a0" Mar 08 02:04:14 crc kubenswrapper[4762]: I0308 02:04:14.308604 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548924-k89xp" Mar 08 02:04:14 crc kubenswrapper[4762]: I0308 02:04:14.371935 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rl58z\" (UniqueName: \"kubernetes.io/projected/ecaf8110-509c-496a-8ef3-49998975a267-kube-api-access-rl58z\") pod \"ecaf8110-509c-496a-8ef3-49998975a267\" (UID: \"ecaf8110-509c-496a-8ef3-49998975a267\") " Mar 08 02:04:14 crc kubenswrapper[4762]: I0308 02:04:14.379511 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecaf8110-509c-496a-8ef3-49998975a267-kube-api-access-rl58z" (OuterVolumeSpecName: "kube-api-access-rl58z") pod "ecaf8110-509c-496a-8ef3-49998975a267" (UID: "ecaf8110-509c-496a-8ef3-49998975a267"). InnerVolumeSpecName "kube-api-access-rl58z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:04:14 crc kubenswrapper[4762]: I0308 02:04:14.474007 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rl58z\" (UniqueName: \"kubernetes.io/projected/ecaf8110-509c-496a-8ef3-49998975a267-kube-api-access-rl58z\") on node \"crc\" DevicePath \"\"" Mar 08 02:04:15 crc kubenswrapper[4762]: I0308 02:04:15.374777 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548918-m4mk6"] Mar 08 02:04:15 crc kubenswrapper[4762]: I0308 02:04:15.389045 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548918-m4mk6"] Mar 08 02:04:17 crc kubenswrapper[4762]: I0308 02:04:17.289325 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59" path="/var/lib/kubelet/pods/e387a5b4-d3a0-4a5c-a59f-30ffaeae1c59/volumes" Mar 08 02:04:42 crc kubenswrapper[4762]: I0308 02:04:42.851374 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 02:04:42 crc kubenswrapper[4762]: I0308 02:04:42.851925 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 02:04:42 crc kubenswrapper[4762]: I0308 02:04:42.851981 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 02:04:42 crc kubenswrapper[4762]: I0308 02:04:42.852934 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 02:04:42 crc kubenswrapper[4762]: I0308 02:04:42.853002 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" gracePeriod=600 Mar 08 02:04:42 crc kubenswrapper[4762]: E0308 02:04:42.988572 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:04:43 crc kubenswrapper[4762]: I0308 02:04:43.728602 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" exitCode=0 Mar 08 02:04:43 crc kubenswrapper[4762]: I0308 02:04:43.728647 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad"} Mar 08 02:04:43 crc kubenswrapper[4762]: I0308 02:04:43.729792 4762 scope.go:117] "RemoveContainer" containerID="0082d97d952cb42aeca70f0b9e9e306bd0a1280ca1e198865dec731c06cbdd6b" Mar 08 02:04:43 crc kubenswrapper[4762]: I0308 02:04:43.729893 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:04:43 crc kubenswrapper[4762]: E0308 02:04:43.730240 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:04:48 crc kubenswrapper[4762]: I0308 02:04:48.101037 4762 scope.go:117] "RemoveContainer" containerID="a9e3c566b97b8dba3e1514c64e34c3ed4d4126771d40199e4651d826da4304d2" Mar 08 02:04:51 crc kubenswrapper[4762]: I0308 02:04:51.811553 4762 generic.go:334] "Generic (PLEG): container finished" podID="1f489e05-0731-46ad-a888-e6746ee00ab9" containerID="f5cd515fe924bdcc712f332feeadaddff9b78abb198e0a58c4e4a20b8c8cf9b6" exitCode=0 Mar 08 
02:04:51 crc kubenswrapper[4762]: I0308 02:04:51.811639 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cqww4/crc-debug-v5hs5" event={"ID":"1f489e05-0731-46ad-a888-e6746ee00ab9","Type":"ContainerDied","Data":"f5cd515fe924bdcc712f332feeadaddff9b78abb198e0a58c4e4a20b8c8cf9b6"} Mar 08 02:04:52 crc kubenswrapper[4762]: I0308 02:04:52.966090 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cqww4/crc-debug-v5hs5" Mar 08 02:04:53 crc kubenswrapper[4762]: I0308 02:04:53.027492 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cqww4/crc-debug-v5hs5"] Mar 08 02:04:53 crc kubenswrapper[4762]: I0308 02:04:53.041852 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cqww4/crc-debug-v5hs5"] Mar 08 02:04:53 crc kubenswrapper[4762]: I0308 02:04:53.087137 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f489e05-0731-46ad-a888-e6746ee00ab9-host\") pod \"1f489e05-0731-46ad-a888-e6746ee00ab9\" (UID: \"1f489e05-0731-46ad-a888-e6746ee00ab9\") " Mar 08 02:04:53 crc kubenswrapper[4762]: I0308 02:04:53.087297 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f489e05-0731-46ad-a888-e6746ee00ab9-host" (OuterVolumeSpecName: "host") pod "1f489e05-0731-46ad-a888-e6746ee00ab9" (UID: "1f489e05-0731-46ad-a888-e6746ee00ab9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 02:04:53 crc kubenswrapper[4762]: I0308 02:04:53.087908 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvzrw\" (UniqueName: \"kubernetes.io/projected/1f489e05-0731-46ad-a888-e6746ee00ab9-kube-api-access-pvzrw\") pod \"1f489e05-0731-46ad-a888-e6746ee00ab9\" (UID: \"1f489e05-0731-46ad-a888-e6746ee00ab9\") " Mar 08 02:04:53 crc kubenswrapper[4762]: I0308 02:04:53.089699 4762 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1f489e05-0731-46ad-a888-e6746ee00ab9-host\") on node \"crc\" DevicePath \"\"" Mar 08 02:04:53 crc kubenswrapper[4762]: I0308 02:04:53.097069 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f489e05-0731-46ad-a888-e6746ee00ab9-kube-api-access-pvzrw" (OuterVolumeSpecName: "kube-api-access-pvzrw") pod "1f489e05-0731-46ad-a888-e6746ee00ab9" (UID: "1f489e05-0731-46ad-a888-e6746ee00ab9"). InnerVolumeSpecName "kube-api-access-pvzrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:04:53 crc kubenswrapper[4762]: I0308 02:04:53.192433 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvzrw\" (UniqueName: \"kubernetes.io/projected/1f489e05-0731-46ad-a888-e6746ee00ab9-kube-api-access-pvzrw\") on node \"crc\" DevicePath \"\"" Mar 08 02:04:53 crc kubenswrapper[4762]: I0308 02:04:53.279373 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f489e05-0731-46ad-a888-e6746ee00ab9" path="/var/lib/kubelet/pods/1f489e05-0731-46ad-a888-e6746ee00ab9/volumes" Mar 08 02:04:53 crc kubenswrapper[4762]: I0308 02:04:53.842046 4762 scope.go:117] "RemoveContainer" containerID="f5cd515fe924bdcc712f332feeadaddff9b78abb198e0a58c4e4a20b8c8cf9b6" Mar 08 02:04:53 crc kubenswrapper[4762]: I0308 02:04:53.842077 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cqww4/crc-debug-v5hs5" Mar 08 02:04:54 crc kubenswrapper[4762]: I0308 02:04:54.226359 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cqww4/crc-debug-gmd29"] Mar 08 02:04:54 crc kubenswrapper[4762]: E0308 02:04:54.226839 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f489e05-0731-46ad-a888-e6746ee00ab9" containerName="container-00" Mar 08 02:04:54 crc kubenswrapper[4762]: I0308 02:04:54.227212 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f489e05-0731-46ad-a888-e6746ee00ab9" containerName="container-00" Mar 08 02:04:54 crc kubenswrapper[4762]: E0308 02:04:54.227231 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecaf8110-509c-496a-8ef3-49998975a267" containerName="oc" Mar 08 02:04:54 crc kubenswrapper[4762]: I0308 02:04:54.227237 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecaf8110-509c-496a-8ef3-49998975a267" containerName="oc" Mar 08 02:04:54 crc kubenswrapper[4762]: I0308 02:04:54.227442 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f489e05-0731-46ad-a888-e6746ee00ab9" containerName="container-00" Mar 08 02:04:54 crc kubenswrapper[4762]: I0308 02:04:54.227472 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecaf8110-509c-496a-8ef3-49998975a267" containerName="oc" Mar 08 02:04:54 crc kubenswrapper[4762]: I0308 02:04:54.228303 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cqww4/crc-debug-gmd29" Mar 08 02:04:54 crc kubenswrapper[4762]: I0308 02:04:54.316857 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5098f0e1-ff1c-4a19-87d2-471cc02081cd-host\") pod \"crc-debug-gmd29\" (UID: \"5098f0e1-ff1c-4a19-87d2-471cc02081cd\") " pod="openshift-must-gather-cqww4/crc-debug-gmd29" Mar 08 02:04:54 crc kubenswrapper[4762]: I0308 02:04:54.316966 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x6ld\" (UniqueName: \"kubernetes.io/projected/5098f0e1-ff1c-4a19-87d2-471cc02081cd-kube-api-access-7x6ld\") pod \"crc-debug-gmd29\" (UID: \"5098f0e1-ff1c-4a19-87d2-471cc02081cd\") " pod="openshift-must-gather-cqww4/crc-debug-gmd29" Mar 08 02:04:54 crc kubenswrapper[4762]: I0308 02:04:54.418749 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x6ld\" (UniqueName: \"kubernetes.io/projected/5098f0e1-ff1c-4a19-87d2-471cc02081cd-kube-api-access-7x6ld\") pod \"crc-debug-gmd29\" (UID: \"5098f0e1-ff1c-4a19-87d2-471cc02081cd\") " pod="openshift-must-gather-cqww4/crc-debug-gmd29" Mar 08 02:04:54 crc kubenswrapper[4762]: I0308 02:04:54.419020 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5098f0e1-ff1c-4a19-87d2-471cc02081cd-host\") pod \"crc-debug-gmd29\" (UID: \"5098f0e1-ff1c-4a19-87d2-471cc02081cd\") " pod="openshift-must-gather-cqww4/crc-debug-gmd29" Mar 08 02:04:54 crc kubenswrapper[4762]: I0308 02:04:54.419272 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5098f0e1-ff1c-4a19-87d2-471cc02081cd-host\") pod \"crc-debug-gmd29\" (UID: \"5098f0e1-ff1c-4a19-87d2-471cc02081cd\") " pod="openshift-must-gather-cqww4/crc-debug-gmd29" Mar 08 02:04:54 crc 
kubenswrapper[4762]: I0308 02:04:54.455405 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x6ld\" (UniqueName: \"kubernetes.io/projected/5098f0e1-ff1c-4a19-87d2-471cc02081cd-kube-api-access-7x6ld\") pod \"crc-debug-gmd29\" (UID: \"5098f0e1-ff1c-4a19-87d2-471cc02081cd\") " pod="openshift-must-gather-cqww4/crc-debug-gmd29" Mar 08 02:04:54 crc kubenswrapper[4762]: I0308 02:04:54.550016 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cqww4/crc-debug-gmd29" Mar 08 02:04:54 crc kubenswrapper[4762]: I0308 02:04:54.856227 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cqww4/crc-debug-gmd29" event={"ID":"5098f0e1-ff1c-4a19-87d2-471cc02081cd","Type":"ContainerStarted","Data":"971b18fee5b0e6eb811f2742b3d59aeca52691e9046d5c74230773b80109f8dc"} Mar 08 02:04:55 crc kubenswrapper[4762]: I0308 02:04:55.869749 4762 generic.go:334] "Generic (PLEG): container finished" podID="5098f0e1-ff1c-4a19-87d2-471cc02081cd" containerID="2164a478b2ef58c28735f96b60d2d3e64ac466666e9ab506a77d6b2fdd906656" exitCode=0 Mar 08 02:04:55 crc kubenswrapper[4762]: I0308 02:04:55.869965 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cqww4/crc-debug-gmd29" event={"ID":"5098f0e1-ff1c-4a19-87d2-471cc02081cd","Type":"ContainerDied","Data":"2164a478b2ef58c28735f96b60d2d3e64ac466666e9ab506a77d6b2fdd906656"} Mar 08 02:04:56 crc kubenswrapper[4762]: I0308 02:04:56.986944 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cqww4/crc-debug-gmd29" Mar 08 02:04:57 crc kubenswrapper[4762]: I0308 02:04:57.080095 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x6ld\" (UniqueName: \"kubernetes.io/projected/5098f0e1-ff1c-4a19-87d2-471cc02081cd-kube-api-access-7x6ld\") pod \"5098f0e1-ff1c-4a19-87d2-471cc02081cd\" (UID: \"5098f0e1-ff1c-4a19-87d2-471cc02081cd\") " Mar 08 02:04:57 crc kubenswrapper[4762]: I0308 02:04:57.080363 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5098f0e1-ff1c-4a19-87d2-471cc02081cd-host\") pod \"5098f0e1-ff1c-4a19-87d2-471cc02081cd\" (UID: \"5098f0e1-ff1c-4a19-87d2-471cc02081cd\") " Mar 08 02:04:57 crc kubenswrapper[4762]: I0308 02:04:57.080466 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5098f0e1-ff1c-4a19-87d2-471cc02081cd-host" (OuterVolumeSpecName: "host") pod "5098f0e1-ff1c-4a19-87d2-471cc02081cd" (UID: "5098f0e1-ff1c-4a19-87d2-471cc02081cd"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 02:04:57 crc kubenswrapper[4762]: I0308 02:04:57.080844 4762 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5098f0e1-ff1c-4a19-87d2-471cc02081cd-host\") on node \"crc\" DevicePath \"\"" Mar 08 02:04:57 crc kubenswrapper[4762]: I0308 02:04:57.086944 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5098f0e1-ff1c-4a19-87d2-471cc02081cd-kube-api-access-7x6ld" (OuterVolumeSpecName: "kube-api-access-7x6ld") pod "5098f0e1-ff1c-4a19-87d2-471cc02081cd" (UID: "5098f0e1-ff1c-4a19-87d2-471cc02081cd"). InnerVolumeSpecName "kube-api-access-7x6ld". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:04:57 crc kubenswrapper[4762]: I0308 02:04:57.182977 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x6ld\" (UniqueName: \"kubernetes.io/projected/5098f0e1-ff1c-4a19-87d2-471cc02081cd-kube-api-access-7x6ld\") on node \"crc\" DevicePath \"\"" Mar 08 02:04:57 crc kubenswrapper[4762]: I0308 02:04:57.797854 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cqww4/crc-debug-gmd29"] Mar 08 02:04:57 crc kubenswrapper[4762]: I0308 02:04:57.807293 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cqww4/crc-debug-gmd29"] Mar 08 02:04:57 crc kubenswrapper[4762]: I0308 02:04:57.892080 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="971b18fee5b0e6eb811f2742b3d59aeca52691e9046d5c74230773b80109f8dc" Mar 08 02:04:57 crc kubenswrapper[4762]: I0308 02:04:57.892110 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cqww4/crc-debug-gmd29" Mar 08 02:04:58 crc kubenswrapper[4762]: I0308 02:04:58.984004 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cqww4/crc-debug-ktqz6"] Mar 08 02:04:58 crc kubenswrapper[4762]: E0308 02:04:58.984525 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5098f0e1-ff1c-4a19-87d2-471cc02081cd" containerName="container-00" Mar 08 02:04:58 crc kubenswrapper[4762]: I0308 02:04:58.984540 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="5098f0e1-ff1c-4a19-87d2-471cc02081cd" containerName="container-00" Mar 08 02:04:58 crc kubenswrapper[4762]: I0308 02:04:58.984870 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="5098f0e1-ff1c-4a19-87d2-471cc02081cd" containerName="container-00" Mar 08 02:04:58 crc kubenswrapper[4762]: I0308 02:04:58.987854 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cqww4/crc-debug-ktqz6" Mar 08 02:04:59 crc kubenswrapper[4762]: I0308 02:04:59.127437 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4344b9a1-268f-4a5c-98d0-86ac1396d20d-host\") pod \"crc-debug-ktqz6\" (UID: \"4344b9a1-268f-4a5c-98d0-86ac1396d20d\") " pod="openshift-must-gather-cqww4/crc-debug-ktqz6" Mar 08 02:04:59 crc kubenswrapper[4762]: I0308 02:04:59.128051 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tss9\" (UniqueName: \"kubernetes.io/projected/4344b9a1-268f-4a5c-98d0-86ac1396d20d-kube-api-access-5tss9\") pod \"crc-debug-ktqz6\" (UID: \"4344b9a1-268f-4a5c-98d0-86ac1396d20d\") " pod="openshift-must-gather-cqww4/crc-debug-ktqz6" Mar 08 02:04:59 crc kubenswrapper[4762]: I0308 02:04:59.230397 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4344b9a1-268f-4a5c-98d0-86ac1396d20d-host\") pod \"crc-debug-ktqz6\" (UID: \"4344b9a1-268f-4a5c-98d0-86ac1396d20d\") " pod="openshift-must-gather-cqww4/crc-debug-ktqz6" Mar 08 02:04:59 crc kubenswrapper[4762]: I0308 02:04:59.230481 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tss9\" (UniqueName: \"kubernetes.io/projected/4344b9a1-268f-4a5c-98d0-86ac1396d20d-kube-api-access-5tss9\") pod \"crc-debug-ktqz6\" (UID: \"4344b9a1-268f-4a5c-98d0-86ac1396d20d\") " pod="openshift-must-gather-cqww4/crc-debug-ktqz6" Mar 08 02:04:59 crc kubenswrapper[4762]: I0308 02:04:59.230904 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4344b9a1-268f-4a5c-98d0-86ac1396d20d-host\") pod \"crc-debug-ktqz6\" (UID: \"4344b9a1-268f-4a5c-98d0-86ac1396d20d\") " pod="openshift-must-gather-cqww4/crc-debug-ktqz6" Mar 08 02:04:59 crc 
kubenswrapper[4762]: I0308 02:04:59.263576 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tss9\" (UniqueName: \"kubernetes.io/projected/4344b9a1-268f-4a5c-98d0-86ac1396d20d-kube-api-access-5tss9\") pod \"crc-debug-ktqz6\" (UID: \"4344b9a1-268f-4a5c-98d0-86ac1396d20d\") " pod="openshift-must-gather-cqww4/crc-debug-ktqz6" Mar 08 02:04:59 crc kubenswrapper[4762]: I0308 02:04:59.275462 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:04:59 crc kubenswrapper[4762]: E0308 02:04:59.275913 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:04:59 crc kubenswrapper[4762]: I0308 02:04:59.278098 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5098f0e1-ff1c-4a19-87d2-471cc02081cd" path="/var/lib/kubelet/pods/5098f0e1-ff1c-4a19-87d2-471cc02081cd/volumes" Mar 08 02:04:59 crc kubenswrapper[4762]: I0308 02:04:59.309820 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cqww4/crc-debug-ktqz6" Mar 08 02:04:59 crc kubenswrapper[4762]: W0308 02:04:59.354494 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4344b9a1_268f_4a5c_98d0_86ac1396d20d.slice/crio-1a3a9072e6dab85a3e5f2d9421fed18f2d79c64d9a9dc64bcf2c8cda6be28f18 WatchSource:0}: Error finding container 1a3a9072e6dab85a3e5f2d9421fed18f2d79c64d9a9dc64bcf2c8cda6be28f18: Status 404 returned error can't find the container with id 1a3a9072e6dab85a3e5f2d9421fed18f2d79c64d9a9dc64bcf2c8cda6be28f18 Mar 08 02:04:59 crc kubenswrapper[4762]: I0308 02:04:59.924506 4762 generic.go:334] "Generic (PLEG): container finished" podID="4344b9a1-268f-4a5c-98d0-86ac1396d20d" containerID="c1a7742bede0d223a9adeadedddfec07a97237415c88d3ccfb76a18849eb8d08" exitCode=0 Mar 08 02:04:59 crc kubenswrapper[4762]: I0308 02:04:59.925100 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cqww4/crc-debug-ktqz6" event={"ID":"4344b9a1-268f-4a5c-98d0-86ac1396d20d","Type":"ContainerDied","Data":"c1a7742bede0d223a9adeadedddfec07a97237415c88d3ccfb76a18849eb8d08"} Mar 08 02:04:59 crc kubenswrapper[4762]: I0308 02:04:59.925188 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cqww4/crc-debug-ktqz6" event={"ID":"4344b9a1-268f-4a5c-98d0-86ac1396d20d","Type":"ContainerStarted","Data":"1a3a9072e6dab85a3e5f2d9421fed18f2d79c64d9a9dc64bcf2c8cda6be28f18"} Mar 08 02:04:59 crc kubenswrapper[4762]: I0308 02:04:59.974390 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cqww4/crc-debug-ktqz6"] Mar 08 02:04:59 crc kubenswrapper[4762]: I0308 02:04:59.991161 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cqww4/crc-debug-ktqz6"] Mar 08 02:05:01 crc kubenswrapper[4762]: I0308 02:05:01.047815 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cqww4/crc-debug-ktqz6" Mar 08 02:05:01 crc kubenswrapper[4762]: I0308 02:05:01.188474 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tss9\" (UniqueName: \"kubernetes.io/projected/4344b9a1-268f-4a5c-98d0-86ac1396d20d-kube-api-access-5tss9\") pod \"4344b9a1-268f-4a5c-98d0-86ac1396d20d\" (UID: \"4344b9a1-268f-4a5c-98d0-86ac1396d20d\") " Mar 08 02:05:01 crc kubenswrapper[4762]: I0308 02:05:01.188749 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4344b9a1-268f-4a5c-98d0-86ac1396d20d-host\") pod \"4344b9a1-268f-4a5c-98d0-86ac1396d20d\" (UID: \"4344b9a1-268f-4a5c-98d0-86ac1396d20d\") " Mar 08 02:05:01 crc kubenswrapper[4762]: I0308 02:05:01.189201 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4344b9a1-268f-4a5c-98d0-86ac1396d20d-host" (OuterVolumeSpecName: "host") pod "4344b9a1-268f-4a5c-98d0-86ac1396d20d" (UID: "4344b9a1-268f-4a5c-98d0-86ac1396d20d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 08 02:05:01 crc kubenswrapper[4762]: I0308 02:05:01.190368 4762 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4344b9a1-268f-4a5c-98d0-86ac1396d20d-host\") on node \"crc\" DevicePath \"\"" Mar 08 02:05:01 crc kubenswrapper[4762]: I0308 02:05:01.195118 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4344b9a1-268f-4a5c-98d0-86ac1396d20d-kube-api-access-5tss9" (OuterVolumeSpecName: "kube-api-access-5tss9") pod "4344b9a1-268f-4a5c-98d0-86ac1396d20d" (UID: "4344b9a1-268f-4a5c-98d0-86ac1396d20d"). InnerVolumeSpecName "kube-api-access-5tss9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:05:01 crc kubenswrapper[4762]: I0308 02:05:01.281981 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4344b9a1-268f-4a5c-98d0-86ac1396d20d" path="/var/lib/kubelet/pods/4344b9a1-268f-4a5c-98d0-86ac1396d20d/volumes" Mar 08 02:05:01 crc kubenswrapper[4762]: I0308 02:05:01.292524 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tss9\" (UniqueName: \"kubernetes.io/projected/4344b9a1-268f-4a5c-98d0-86ac1396d20d-kube-api-access-5tss9\") on node \"crc\" DevicePath \"\"" Mar 08 02:05:01 crc kubenswrapper[4762]: I0308 02:05:01.948083 4762 scope.go:117] "RemoveContainer" containerID="c1a7742bede0d223a9adeadedddfec07a97237415c88d3ccfb76a18849eb8d08" Mar 08 02:05:01 crc kubenswrapper[4762]: I0308 02:05:01.948142 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cqww4/crc-debug-ktqz6" Mar 08 02:05:10 crc kubenswrapper[4762]: I0308 02:05:10.263926 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:05:10 crc kubenswrapper[4762]: E0308 02:05:10.265286 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:05:21 crc kubenswrapper[4762]: I0308 02:05:21.264377 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:05:21 crc kubenswrapper[4762]: E0308 02:05:21.265386 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:05:36 crc kubenswrapper[4762]: I0308 02:05:36.263863 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:05:36 crc kubenswrapper[4762]: E0308 02:05:36.264836 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:05:50 crc kubenswrapper[4762]: I0308 02:05:50.264107 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:05:50 crc kubenswrapper[4762]: E0308 02:05:50.264880 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:06:00 crc kubenswrapper[4762]: I0308 02:06:00.159902 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548926-snvmr"] Mar 08 02:06:00 crc kubenswrapper[4762]: E0308 02:06:00.161106 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4344b9a1-268f-4a5c-98d0-86ac1396d20d" 
containerName="container-00" Mar 08 02:06:00 crc kubenswrapper[4762]: I0308 02:06:00.161126 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="4344b9a1-268f-4a5c-98d0-86ac1396d20d" containerName="container-00" Mar 08 02:06:00 crc kubenswrapper[4762]: I0308 02:06:00.161435 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="4344b9a1-268f-4a5c-98d0-86ac1396d20d" containerName="container-00" Mar 08 02:06:00 crc kubenswrapper[4762]: I0308 02:06:00.162410 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548926-snvmr" Mar 08 02:06:00 crc kubenswrapper[4762]: I0308 02:06:00.167428 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 02:06:00 crc kubenswrapper[4762]: I0308 02:06:00.167753 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 02:06:00 crc kubenswrapper[4762]: I0308 02:06:00.167976 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 02:06:00 crc kubenswrapper[4762]: I0308 02:06:00.206947 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548926-snvmr"] Mar 08 02:06:00 crc kubenswrapper[4762]: I0308 02:06:00.295382 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgjhg\" (UniqueName: \"kubernetes.io/projected/3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f-kube-api-access-xgjhg\") pod \"auto-csr-approver-29548926-snvmr\" (UID: \"3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f\") " pod="openshift-infra/auto-csr-approver-29548926-snvmr" Mar 08 02:06:00 crc kubenswrapper[4762]: I0308 02:06:00.398808 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgjhg\" (UniqueName: 
\"kubernetes.io/projected/3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f-kube-api-access-xgjhg\") pod \"auto-csr-approver-29548926-snvmr\" (UID: \"3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f\") " pod="openshift-infra/auto-csr-approver-29548926-snvmr" Mar 08 02:06:00 crc kubenswrapper[4762]: I0308 02:06:00.423861 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgjhg\" (UniqueName: \"kubernetes.io/projected/3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f-kube-api-access-xgjhg\") pod \"auto-csr-approver-29548926-snvmr\" (UID: \"3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f\") " pod="openshift-infra/auto-csr-approver-29548926-snvmr" Mar 08 02:06:00 crc kubenswrapper[4762]: I0308 02:06:00.496738 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548926-snvmr" Mar 08 02:06:00 crc kubenswrapper[4762]: I0308 02:06:00.966879 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548926-snvmr"] Mar 08 02:06:01 crc kubenswrapper[4762]: I0308 02:06:01.823907 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548926-snvmr" event={"ID":"3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f","Type":"ContainerStarted","Data":"45b6d9ba74ec9d079fa5ab4a9cd588c4b05705c2673eae3dfaa58f1c96922568"} Mar 08 02:06:02 crc kubenswrapper[4762]: I0308 02:06:02.839327 4762 generic.go:334] "Generic (PLEG): container finished" podID="3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f" containerID="e6cfe622d3b847d080c5a2e6e366d96117ce23d25bc53d8179c3c19d8e7ff671" exitCode=0 Mar 08 02:06:02 crc kubenswrapper[4762]: I0308 02:06:02.839558 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548926-snvmr" event={"ID":"3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f","Type":"ContainerDied","Data":"e6cfe622d3b847d080c5a2e6e366d96117ce23d25bc53d8179c3c19d8e7ff671"} Mar 08 02:06:04 crc kubenswrapper[4762]: I0308 02:06:04.260257 4762 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548926-snvmr" Mar 08 02:06:04 crc kubenswrapper[4762]: I0308 02:06:04.392692 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgjhg\" (UniqueName: \"kubernetes.io/projected/3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f-kube-api-access-xgjhg\") pod \"3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f\" (UID: \"3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f\") " Mar 08 02:06:04 crc kubenswrapper[4762]: I0308 02:06:04.418282 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f-kube-api-access-xgjhg" (OuterVolumeSpecName: "kube-api-access-xgjhg") pod "3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f" (UID: "3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f"). InnerVolumeSpecName "kube-api-access-xgjhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:06:04 crc kubenswrapper[4762]: I0308 02:06:04.496130 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgjhg\" (UniqueName: \"kubernetes.io/projected/3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f-kube-api-access-xgjhg\") on node \"crc\" DevicePath \"\"" Mar 08 02:06:04 crc kubenswrapper[4762]: I0308 02:06:04.868298 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548926-snvmr" event={"ID":"3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f","Type":"ContainerDied","Data":"45b6d9ba74ec9d079fa5ab4a9cd588c4b05705c2673eae3dfaa58f1c96922568"} Mar 08 02:06:04 crc kubenswrapper[4762]: I0308 02:06:04.868517 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45b6d9ba74ec9d079fa5ab4a9cd588c4b05705c2673eae3dfaa58f1c96922568" Mar 08 02:06:04 crc kubenswrapper[4762]: I0308 02:06:04.868562 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548926-snvmr" Mar 08 02:06:05 crc kubenswrapper[4762]: I0308 02:06:05.046380 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e7f80bd6-31fa-43d5-b6d5-6667fd1486e5/aodh-api/0.log" Mar 08 02:06:05 crc kubenswrapper[4762]: I0308 02:06:05.236262 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e7f80bd6-31fa-43d5-b6d5-6667fd1486e5/aodh-listener/0.log" Mar 08 02:06:05 crc kubenswrapper[4762]: I0308 02:06:05.246158 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e7f80bd6-31fa-43d5-b6d5-6667fd1486e5/aodh-notifier/0.log" Mar 08 02:06:05 crc kubenswrapper[4762]: I0308 02:06:05.249239 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e7f80bd6-31fa-43d5-b6d5-6667fd1486e5/aodh-evaluator/0.log" Mar 08 02:06:05 crc kubenswrapper[4762]: I0308 02:06:05.263659 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:06:05 crc kubenswrapper[4762]: E0308 02:06:05.263936 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:06:05 crc kubenswrapper[4762]: I0308 02:06:05.322406 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548920-pf9rj"] Mar 08 02:06:05 crc kubenswrapper[4762]: I0308 02:06:05.331521 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548920-pf9rj"] Mar 08 02:06:05 crc kubenswrapper[4762]: I0308 02:06:05.443890 4762 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_barbican-api-86c4db5cfd-rtfn2_5b4e0a41-4c23-4e6f-8420-baa2dabdfef6/barbican-api/0.log" Mar 08 02:06:05 crc kubenswrapper[4762]: I0308 02:06:05.521503 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-86c4db5cfd-rtfn2_5b4e0a41-4c23-4e6f-8420-baa2dabdfef6/barbican-api-log/0.log" Mar 08 02:06:05 crc kubenswrapper[4762]: I0308 02:06:05.648323 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6875ccb78-kt8h4_9f2c4db5-761b-407b-9e2f-a46ca6bc5675/barbican-keystone-listener/0.log" Mar 08 02:06:05 crc kubenswrapper[4762]: I0308 02:06:05.759056 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6875ccb78-kt8h4_9f2c4db5-761b-407b-9e2f-a46ca6bc5675/barbican-keystone-listener-log/0.log" Mar 08 02:06:05 crc kubenswrapper[4762]: I0308 02:06:05.779680 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-65899b8d79-mjhtt_955ae1b9-66fe-47c3-934b-a4372a87e21a/barbican-worker/0.log" Mar 08 02:06:05 crc kubenswrapper[4762]: I0308 02:06:05.817997 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-65899b8d79-mjhtt_955ae1b9-66fe-47c3-934b-a4372a87e21a/barbican-worker-log/0.log" Mar 08 02:06:06 crc kubenswrapper[4762]: I0308 02:06:06.030421 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-c9fz6_7dd4b57d-ccf5-4e89-afe0-9e7b6b0f4b6a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:06 crc kubenswrapper[4762]: I0308 02:06:06.038506 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14/ceilometer-central-agent/1.log" Mar 08 02:06:06 crc kubenswrapper[4762]: I0308 02:06:06.214813 4762 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14/ceilometer-central-agent/0.log" Mar 08 02:06:06 crc kubenswrapper[4762]: I0308 02:06:06.234261 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14/proxy-httpd/0.log" Mar 08 02:06:06 crc kubenswrapper[4762]: I0308 02:06:06.262935 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14/ceilometer-notification-agent/0.log" Mar 08 02:06:06 crc kubenswrapper[4762]: I0308 02:06:06.293084 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3b8b24a9-6c2e-43cc-ab5f-f5b85e18be14/sg-core/0.log" Mar 08 02:06:06 crc kubenswrapper[4762]: I0308 02:06:06.437797 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-c8p2k_cfed1788-c505-44a2-93ee-cdb89f86f1a7/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:06 crc kubenswrapper[4762]: I0308 02:06:06.505478 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-4mp82_c0a2baeb-7f9c-47e2-b201-cd7ff3641547/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:06 crc kubenswrapper[4762]: I0308 02:06:06.774045 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_675453de-83d4-4420-a560-4a11696a849c/cinder-api/0.log" Mar 08 02:06:06 crc kubenswrapper[4762]: I0308 02:06:06.834868 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_675453de-83d4-4420-a560-4a11696a849c/cinder-api-log/0.log" Mar 08 02:06:06 crc kubenswrapper[4762]: I0308 02:06:06.911857 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3fa8be70-ca35-4c49-867c-43a10b8f6f8e/cinder-backup/1.log" Mar 08 02:06:07 crc kubenswrapper[4762]: I0308 02:06:07.065348 
4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3fa8be70-ca35-4c49-867c-43a10b8f6f8e/cinder-backup/0.log" Mar 08 02:06:07 crc kubenswrapper[4762]: I0308 02:06:07.081490 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_3fa8be70-ca35-4c49-867c-43a10b8f6f8e/probe/0.log" Mar 08 02:06:07 crc kubenswrapper[4762]: I0308 02:06:07.126678 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_af0c65d2-782a-49ee-a867-296757df295b/cinder-scheduler/1.log" Mar 08 02:06:07 crc kubenswrapper[4762]: I0308 02:06:07.267508 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_af0c65d2-782a-49ee-a867-296757df295b/cinder-scheduler/0.log" Mar 08 02:06:07 crc kubenswrapper[4762]: I0308 02:06:07.279103 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="313b0ee7-5304-4ad5-a676-84faabdbfdd8" path="/var/lib/kubelet/pods/313b0ee7-5304-4ad5-a676-84faabdbfdd8/volumes" Mar 08 02:06:07 crc kubenswrapper[4762]: I0308 02:06:07.333430 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_af0c65d2-782a-49ee-a867-296757df295b/probe/0.log" Mar 08 02:06:07 crc kubenswrapper[4762]: I0308 02:06:07.357311 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_8eea7cf3-6a5e-4661-a544-a48ebc424a89/cinder-volume/1.log" Mar 08 02:06:07 crc kubenswrapper[4762]: I0308 02:06:07.481889 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_8eea7cf3-6a5e-4661-a544-a48ebc424a89/cinder-volume/0.log" Mar 08 02:06:07 crc kubenswrapper[4762]: I0308 02:06:07.543854 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_8eea7cf3-6a5e-4661-a544-a48ebc424a89/probe/0.log" Mar 08 02:06:07 crc kubenswrapper[4762]: I0308 02:06:07.633286 4762 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-vq6fs_60632753-693e-4352-b416-3b64699b7e67/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:07 crc kubenswrapper[4762]: I0308 02:06:07.789575 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-m5qdc_64ea8060-e683-425e-8ae0-0250d59a2c46/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:07 crc kubenswrapper[4762]: I0308 02:06:07.890047 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c8d8d886c-75966_328fe58c-2b7a-4e69-8103-0d7dcf57d008/init/0.log" Mar 08 02:06:08 crc kubenswrapper[4762]: I0308 02:06:08.086357 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c8d8d886c-75966_328fe58c-2b7a-4e69-8103-0d7dcf57d008/init/0.log" Mar 08 02:06:08 crc kubenswrapper[4762]: I0308 02:06:08.112010 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_426654cc-0d6c-4a1c-8614-2e7be9e750fe/glance-httpd/0.log" Mar 08 02:06:08 crc kubenswrapper[4762]: I0308 02:06:08.138256 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-c8d8d886c-75966_328fe58c-2b7a-4e69-8103-0d7dcf57d008/dnsmasq-dns/0.log" Mar 08 02:06:08 crc kubenswrapper[4762]: I0308 02:06:08.275247 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_426654cc-0d6c-4a1c-8614-2e7be9e750fe/glance-log/0.log" Mar 08 02:06:08 crc kubenswrapper[4762]: I0308 02:06:08.358655 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_56528a09-adcd-4337-80e8-3848a7cfa652/glance-httpd/0.log" Mar 08 02:06:08 crc kubenswrapper[4762]: I0308 02:06:08.381090 4762 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_56528a09-adcd-4337-80e8-3848a7cfa652/glance-log/0.log" Mar 08 02:06:08 crc kubenswrapper[4762]: I0308 02:06:08.788220 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-7588678759-6jpjt_9976fcf2-7f49-45af-afe2-d5c3e07f2cac/heat-engine/0.log" Mar 08 02:06:09 crc kubenswrapper[4762]: I0308 02:06:09.193528 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6c45886cfb-v4xv2_f395fca0-1bc0-43fe-aca6-4910f6ca3347/horizon/0.log" Mar 08 02:06:09 crc kubenswrapper[4762]: I0308 02:06:09.327157 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-gt49h_25c01bbb-608e-408d-acd4-636bf28176e3/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:09 crc kubenswrapper[4762]: I0308 02:06:09.409409 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6c45886cfb-v4xv2_f395fca0-1bc0-43fe-aca6-4910f6ca3347/horizon-log/0.log" Mar 08 02:06:09 crc kubenswrapper[4762]: I0308 02:06:09.451725 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-74bd49b68d-nxdxh_08f3408b-7c06-4574-b984-a4bd2ee0d99f/heat-api/0.log" Mar 08 02:06:09 crc kubenswrapper[4762]: I0308 02:06:09.520914 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-5f957556fb-8j7fl_d4a45fcf-70a0-4fc6-a592-f2c588936b3c/heat-cfnapi/0.log" Mar 08 02:06:09 crc kubenswrapper[4762]: I0308 02:06:09.596576 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-n6jzp_648ab410-6f12-42c0-83de-35e6a44712b1/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:09 crc kubenswrapper[4762]: I0308 02:06:09.730612 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29548861-bhbxf_d3638a10-cb89-4a5e-bd32-db41c873db68/keystone-cron/0.log" 
Mar 08 02:06:10 crc kubenswrapper[4762]: I0308 02:06:10.074726 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29548921-9c9cl_4c591691-ded4-4e08-8401-18558cbaf829/keystone-cron/0.log" Mar 08 02:06:10 crc kubenswrapper[4762]: I0308 02:06:10.246306 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_b14f9065-ffe7-430a-b9e9-f62ce942558e/kube-state-metrics/0.log" Mar 08 02:06:10 crc kubenswrapper[4762]: I0308 02:06:10.377684 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-qmzjt_ff17c293-0613-494b-a138-a29de53bb297/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:10 crc kubenswrapper[4762]: I0308 02:06:10.474133 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-b9c87cdf8-vw485_bcb030ee-3f45-488d-8e3c-6c0fa66fdaf4/keystone-api/0.log" Mar 08 02:06:10 crc kubenswrapper[4762]: I0308 02:06:10.527980 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-mpgrv_b52f8064-ed82-432d-9846-87e8a5282382/logging-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:10 crc kubenswrapper[4762]: I0308 02:06:10.643874 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_88cfd6da-0fda-4f8d-8611-d609aa5b3276/manila-api/0.log" Mar 08 02:06:10 crc kubenswrapper[4762]: I0308 02:06:10.700810 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_88cfd6da-0fda-4f8d-8611-d609aa5b3276/manila-api-log/0.log" Mar 08 02:06:10 crc kubenswrapper[4762]: I0308 02:06:10.837869 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_65897654-e519-4a6a-9557-2344198bc5cd/manila-scheduler/1.log" Mar 08 02:06:10 crc kubenswrapper[4762]: I0308 02:06:10.906250 4762 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_manila-scheduler-0_65897654-e519-4a6a-9557-2344198bc5cd/manila-scheduler/0.log" Mar 08 02:06:10 crc kubenswrapper[4762]: I0308 02:06:10.912987 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_65897654-e519-4a6a-9557-2344198bc5cd/probe/0.log" Mar 08 02:06:11 crc kubenswrapper[4762]: I0308 02:06:11.072801 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_2a5c5599-66a7-46b2-8f10-2bfe3905d5fd/manila-share/0.log" Mar 08 02:06:11 crc kubenswrapper[4762]: I0308 02:06:11.104221 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_2a5c5599-66a7-46b2-8f10-2bfe3905d5fd/probe/0.log" Mar 08 02:06:11 crc kubenswrapper[4762]: I0308 02:06:11.106874 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_2a5c5599-66a7-46b2-8f10-2bfe3905d5fd/manila-share/1.log" Mar 08 02:06:11 crc kubenswrapper[4762]: I0308 02:06:11.308461 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_231f2489-76c6-4bba-92aa-65f049a666de/mysqld-exporter/0.log" Mar 08 02:06:11 crc kubenswrapper[4762]: I0308 02:06:11.647634 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-849f745c8c-pjhz2_aa4ce185-2a5f-4da0-89e0-ad8fb82bd170/neutron-api/0.log" Mar 08 02:06:11 crc kubenswrapper[4762]: I0308 02:06:11.658835 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-v255c_6fdab820-9b0c-4bb9-b3aa-c00329fcf356/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:11 crc kubenswrapper[4762]: I0308 02:06:11.676840 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-849f745c8c-pjhz2_aa4ce185-2a5f-4da0-89e0-ad8fb82bd170/neutron-httpd/0.log" Mar 08 02:06:12 crc kubenswrapper[4762]: I0308 02:06:12.273418 4762 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_1c1e4f25-7c6d-451d-bf78-2b1aa728b80e/nova-cell0-conductor-conductor/0.log" Mar 08 02:06:12 crc kubenswrapper[4762]: I0308 02:06:12.404461 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_312fe1d6-7a03-4cb5-8675-3863ce774c6f/nova-api-log/0.log" Mar 08 02:06:12 crc kubenswrapper[4762]: I0308 02:06:12.643500 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_88cfd032-2d2e-4680-bbcb-22eac7f47578/nova-cell1-conductor-conductor/0.log" Mar 08 02:06:12 crc kubenswrapper[4762]: I0308 02:06:12.747436 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_5fcc2528-a57a-4197-879c-cd345baf4513/nova-cell1-novncproxy-novncproxy/0.log" Mar 08 02:06:12 crc kubenswrapper[4762]: I0308 02:06:12.887628 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_312fe1d6-7a03-4cb5-8675-3863ce774c6f/nova-api-api/0.log" Mar 08 02:06:12 crc kubenswrapper[4762]: I0308 02:06:12.906630 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zmqjz_41caed5b-cdb9-492f-bb7b-6b799e1811e0/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:13 crc kubenswrapper[4762]: I0308 02:06:13.076208 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f3231a23-d920-4cf2-b78f-65ecf0d67c77/nova-metadata-log/0.log" Mar 08 02:06:13 crc kubenswrapper[4762]: I0308 02:06:13.380948 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f50a5390-b172-470a-bcfd-161e360d90db/mysql-bootstrap/0.log" Mar 08 02:06:13 crc kubenswrapper[4762]: I0308 02:06:13.404129 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7b9f29ab-520d-47cb-85dc-cd128b475b2a/nova-scheduler-scheduler/0.log" Mar 08 02:06:13 crc 
kubenswrapper[4762]: I0308 02:06:13.660532 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f50a5390-b172-470a-bcfd-161e360d90db/galera/0.log" Mar 08 02:06:13 crc kubenswrapper[4762]: I0308 02:06:13.660959 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f50a5390-b172-470a-bcfd-161e360d90db/mysql-bootstrap/0.log" Mar 08 02:06:13 crc kubenswrapper[4762]: I0308 02:06:13.704856 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f50a5390-b172-470a-bcfd-161e360d90db/galera/1.log" Mar 08 02:06:13 crc kubenswrapper[4762]: I0308 02:06:13.860965 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0d82ab27-d2d8-486a-8514-2af542e4223a/mysql-bootstrap/0.log" Mar 08 02:06:14 crc kubenswrapper[4762]: I0308 02:06:14.109564 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0d82ab27-d2d8-486a-8514-2af542e4223a/mysql-bootstrap/0.log" Mar 08 02:06:14 crc kubenswrapper[4762]: I0308 02:06:14.153052 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0d82ab27-d2d8-486a-8514-2af542e4223a/galera/0.log" Mar 08 02:06:14 crc kubenswrapper[4762]: I0308 02:06:14.161879 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0d82ab27-d2d8-486a-8514-2af542e4223a/galera/1.log" Mar 08 02:06:14 crc kubenswrapper[4762]: I0308 02:06:14.367338 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_59501363-c16d-4d5b-97b4-42322e95ab83/openstackclient/0.log" Mar 08 02:06:14 crc kubenswrapper[4762]: I0308 02:06:14.631232 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-kkckg_bcab1df7-ddcc-4784-8a49-0be5161590f2/ovn-controller/0.log" Mar 08 02:06:14 crc kubenswrapper[4762]: I0308 02:06:14.682431 4762 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-b987k_63452061-1f2b-471a-bb81-e71fa2249560/openstack-network-exporter/0.log" Mar 08 02:06:14 crc kubenswrapper[4762]: I0308 02:06:14.873806 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ffhbt_21774b04-29d4-4687-b650-87eed791f3e8/ovsdb-server-init/0.log" Mar 08 02:06:15 crc kubenswrapper[4762]: I0308 02:06:15.053546 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ffhbt_21774b04-29d4-4687-b650-87eed791f3e8/ovsdb-server-init/0.log" Mar 08 02:06:15 crc kubenswrapper[4762]: I0308 02:06:15.081214 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ffhbt_21774b04-29d4-4687-b650-87eed791f3e8/ovs-vswitchd/0.log" Mar 08 02:06:15 crc kubenswrapper[4762]: I0308 02:06:15.108097 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ffhbt_21774b04-29d4-4687-b650-87eed791f3e8/ovsdb-server/0.log" Mar 08 02:06:15 crc kubenswrapper[4762]: I0308 02:06:15.336697 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-58527_9644de97-590e-4e5d-b951-241947044e95/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:15 crc kubenswrapper[4762]: I0308 02:06:15.382175 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_f3231a23-d920-4cf2-b78f-65ecf0d67c77/nova-metadata-metadata/0.log" Mar 08 02:06:15 crc kubenswrapper[4762]: I0308 02:06:15.535403 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_522084fd-c43a-45ad-a62a-a6a24d4e1a1b/ovn-northd/0.log" Mar 08 02:06:15 crc kubenswrapper[4762]: I0308 02:06:15.543457 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_522084fd-c43a-45ad-a62a-a6a24d4e1a1b/openstack-network-exporter/0.log" Mar 08 02:06:15 crc kubenswrapper[4762]: I0308 
02:06:15.693831 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c3e5a947-ec53-4871-a0d8-c51ca14cf8c4/openstack-network-exporter/0.log" Mar 08 02:06:15 crc kubenswrapper[4762]: I0308 02:06:15.762371 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c3e5a947-ec53-4871-a0d8-c51ca14cf8c4/ovsdbserver-nb/0.log" Mar 08 02:06:15 crc kubenswrapper[4762]: I0308 02:06:15.923611 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c232bd40-b650-4530-8df3-2fd1c3f57398/openstack-network-exporter/0.log" Mar 08 02:06:15 crc kubenswrapper[4762]: I0308 02:06:15.938668 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c232bd40-b650-4530-8df3-2fd1c3f57398/ovsdbserver-sb/0.log" Mar 08 02:06:16 crc kubenswrapper[4762]: I0308 02:06:16.134642 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5b78746fdd-smtch_862ffdc5-bd1d-421a-9e37-0752fdf2c05f/placement-api/0.log" Mar 08 02:06:16 crc kubenswrapper[4762]: I0308 02:06:16.240478 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8103d22d-043e-4af1-a19d-307905e2a05f/init-config-reloader/0.log" Mar 08 02:06:16 crc kubenswrapper[4762]: I0308 02:06:16.253347 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5b78746fdd-smtch_862ffdc5-bd1d-421a-9e37-0752fdf2c05f/placement-log/0.log" Mar 08 02:06:16 crc kubenswrapper[4762]: I0308 02:06:16.263426 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:06:16 crc kubenswrapper[4762]: E0308 02:06:16.263848 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:06:16 crc kubenswrapper[4762]: I0308 02:06:16.427676 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8103d22d-043e-4af1-a19d-307905e2a05f/init-config-reloader/0.log" Mar 08 02:06:16 crc kubenswrapper[4762]: I0308 02:06:16.446610 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8103d22d-043e-4af1-a19d-307905e2a05f/config-reloader/0.log" Mar 08 02:06:16 crc kubenswrapper[4762]: I0308 02:06:16.475224 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8103d22d-043e-4af1-a19d-307905e2a05f/prometheus/0.log" Mar 08 02:06:16 crc kubenswrapper[4762]: I0308 02:06:16.504845 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_8103d22d-043e-4af1-a19d-307905e2a05f/thanos-sidecar/0.log" Mar 08 02:06:16 crc kubenswrapper[4762]: I0308 02:06:16.667928 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cea7862c-6515-43de-826c-87e285980ca0/setup-container/0.log" Mar 08 02:06:16 crc kubenswrapper[4762]: I0308 02:06:16.836858 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cea7862c-6515-43de-826c-87e285980ca0/setup-container/0.log" Mar 08 02:06:16 crc kubenswrapper[4762]: I0308 02:06:16.890614 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_cea7862c-6515-43de-826c-87e285980ca0/rabbitmq/0.log" Mar 08 02:06:17 crc kubenswrapper[4762]: I0308 02:06:17.003610 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_83567ea1-f607-4be2-b0af-6d09bcf74e06/setup-container/0.log" Mar 08 
02:06:17 crc kubenswrapper[4762]: I0308 02:06:17.117450 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_83567ea1-f607-4be2-b0af-6d09bcf74e06/setup-container/0.log" Mar 08 02:06:17 crc kubenswrapper[4762]: I0308 02:06:17.198984 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-bpm7j_11eeb278-f92e-410f-93b6-1797527d31ad/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:17 crc kubenswrapper[4762]: I0308 02:06:17.205854 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_83567ea1-f607-4be2-b0af-6d09bcf74e06/rabbitmq/0.log" Mar 08 02:06:17 crc kubenswrapper[4762]: I0308 02:06:17.401715 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-g4kwf_6833455d-3ceb-4dc6-9722-641f0b1dc40c/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:17 crc kubenswrapper[4762]: I0308 02:06:17.507552 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-4szkg_b2693512-468b-4a97-94f1-27bb9301963a/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:17 crc kubenswrapper[4762]: I0308 02:06:17.627698 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-d7dkx_71462aef-16e4-4c41-a3ad-11664b64443d/ssh-known-hosts-edpm-deployment/0.log" Mar 08 02:06:17 crc kubenswrapper[4762]: I0308 02:06:17.855389 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d78d68b57-45zwj_fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa/proxy-server/0.log" Mar 08 02:06:17 crc kubenswrapper[4762]: I0308 02:06:17.911258 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-ch7rd_6720c495-ef50-49f5-ae64-d3f0bcca1f68/swift-ring-rebalance/0.log" Mar 08 02:06:17 crc kubenswrapper[4762]: I0308 
02:06:17.983206 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6d78d68b57-45zwj_fe1eee87-49ac-4e74-99a8-9cf1ab3a0eaa/proxy-httpd/0.log" Mar 08 02:06:18 crc kubenswrapper[4762]: I0308 02:06:18.125902 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eb5158d2-f742-4eef-8c66-f2db685aeb9e/account-auditor/0.log" Mar 08 02:06:18 crc kubenswrapper[4762]: I0308 02:06:18.126134 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eb5158d2-f742-4eef-8c66-f2db685aeb9e/account-reaper/0.log" Mar 08 02:06:18 crc kubenswrapper[4762]: I0308 02:06:18.275233 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eb5158d2-f742-4eef-8c66-f2db685aeb9e/account-replicator/0.log" Mar 08 02:06:18 crc kubenswrapper[4762]: I0308 02:06:18.304257 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eb5158d2-f742-4eef-8c66-f2db685aeb9e/container-auditor/0.log" Mar 08 02:06:18 crc kubenswrapper[4762]: I0308 02:06:18.366207 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eb5158d2-f742-4eef-8c66-f2db685aeb9e/account-server/0.log" Mar 08 02:06:18 crc kubenswrapper[4762]: I0308 02:06:18.445527 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eb5158d2-f742-4eef-8c66-f2db685aeb9e/container-replicator/0.log" Mar 08 02:06:18 crc kubenswrapper[4762]: I0308 02:06:18.510507 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eb5158d2-f742-4eef-8c66-f2db685aeb9e/container-server/0.log" Mar 08 02:06:18 crc kubenswrapper[4762]: I0308 02:06:18.534010 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eb5158d2-f742-4eef-8c66-f2db685aeb9e/container-updater/0.log" Mar 08 02:06:18 crc kubenswrapper[4762]: I0308 02:06:18.637430 4762 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_eb5158d2-f742-4eef-8c66-f2db685aeb9e/object-auditor/0.log" Mar 08 02:06:18 crc kubenswrapper[4762]: I0308 02:06:18.668327 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eb5158d2-f742-4eef-8c66-f2db685aeb9e/object-expirer/0.log" Mar 08 02:06:18 crc kubenswrapper[4762]: I0308 02:06:18.782702 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eb5158d2-f742-4eef-8c66-f2db685aeb9e/object-server/0.log" Mar 08 02:06:18 crc kubenswrapper[4762]: I0308 02:06:18.802761 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eb5158d2-f742-4eef-8c66-f2db685aeb9e/object-replicator/0.log" Mar 08 02:06:18 crc kubenswrapper[4762]: I0308 02:06:18.869379 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eb5158d2-f742-4eef-8c66-f2db685aeb9e/object-updater/0.log" Mar 08 02:06:18 crc kubenswrapper[4762]: I0308 02:06:18.880584 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eb5158d2-f742-4eef-8c66-f2db685aeb9e/rsync/0.log" Mar 08 02:06:18 crc kubenswrapper[4762]: I0308 02:06:18.965244 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_eb5158d2-f742-4eef-8c66-f2db685aeb9e/swift-recon-cron/0.log" Mar 08 02:06:19 crc kubenswrapper[4762]: I0308 02:06:19.158982 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-mgkxc_01ee853c-8ebd-41f2-8ea3-6a935a9ff7dc/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:19 crc kubenswrapper[4762]: I0308 02:06:19.294038 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-8dm8s_677ad41b-e2cb-4329-a014-3427d8ec936b/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:19 crc kubenswrapper[4762]: 
I0308 02:06:19.522499 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_899bc607-bcbf-4b42-85e6-7635ff538c92/test-operator-logs-container/0.log" Mar 08 02:06:19 crc kubenswrapper[4762]: I0308 02:06:19.707523 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-b25bg_7805ca0f-f81b-450a-b661-c8599dd9e719/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Mar 08 02:06:19 crc kubenswrapper[4762]: I0308 02:06:19.816808 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_b14c85df-f56a-4a30-bf25-0f41cd88b32d/tempest-tests-tempest-tests-runner/0.log" Mar 08 02:06:24 crc kubenswrapper[4762]: I0308 02:06:24.700065 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_58ff3546-162f-4796-961b-2943d7465355/memcached/0.log" Mar 08 02:06:28 crc kubenswrapper[4762]: I0308 02:06:28.263384 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:06:28 crc kubenswrapper[4762]: E0308 02:06:28.264146 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:06:39 crc kubenswrapper[4762]: I0308 02:06:39.278779 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:06:39 crc kubenswrapper[4762]: E0308 02:06:39.279660 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:06:48 crc kubenswrapper[4762]: I0308 02:06:48.203968 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm_31b564be-f28f-428b-a65e-fd3521c7b9f3/util/0.log" Mar 08 02:06:48 crc kubenswrapper[4762]: I0308 02:06:48.286900 4762 scope.go:117] "RemoveContainer" containerID="2eff4f53374ec438afb4012ff645ba6da77f98d55e375b6d61fb5b5e6f3d5850" Mar 08 02:06:48 crc kubenswrapper[4762]: I0308 02:06:48.477616 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm_31b564be-f28f-428b-a65e-fd3521c7b9f3/util/0.log" Mar 08 02:06:48 crc kubenswrapper[4762]: I0308 02:06:48.506257 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm_31b564be-f28f-428b-a65e-fd3521c7b9f3/pull/0.log" Mar 08 02:06:48 crc kubenswrapper[4762]: I0308 02:06:48.510964 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm_31b564be-f28f-428b-a65e-fd3521c7b9f3/pull/0.log" Mar 08 02:06:48 crc kubenswrapper[4762]: I0308 02:06:48.668420 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm_31b564be-f28f-428b-a65e-fd3521c7b9f3/util/0.log" Mar 08 02:06:48 crc kubenswrapper[4762]: I0308 02:06:48.677243 4762 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm_31b564be-f28f-428b-a65e-fd3521c7b9f3/extract/0.log" Mar 08 02:06:48 crc kubenswrapper[4762]: I0308 02:06:48.689267 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_3f810e9acf798d0455e71d358a8a28632c11ce735858cb6b14e22ee64b7cpsm_31b564be-f28f-428b-a65e-fd3521c7b9f3/pull/0.log" Mar 08 02:06:49 crc kubenswrapper[4762]: I0308 02:06:49.384645 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-8r57n_60096a41-cef5-4818-a549-96b51b04cd8f/manager/0.log" Mar 08 02:06:49 crc kubenswrapper[4762]: I0308 02:06:49.990788 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-c6prb_625fe5b5-181a-47db-8656-00c8f5fc045f/manager/0.log" Mar 08 02:06:50 crc kubenswrapper[4762]: I0308 02:06:50.418429 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-vmb9b_2352d4f2-aadc-4ad7-806e-9324d3be5116/manager/1.log" Mar 08 02:06:50 crc kubenswrapper[4762]: I0308 02:06:50.764280 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-vmb9b_2352d4f2-aadc-4ad7-806e-9324d3be5116/manager/0.log" Mar 08 02:06:51 crc kubenswrapper[4762]: I0308 02:06:51.189657 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-78bc7f9bd9-m7h5s_d5f0be01-26e9-4c4e-8122-61659529e505/manager/0.log" Mar 08 02:06:52 crc kubenswrapper[4762]: I0308 02:06:52.178528 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-6dwmz_e2cdcc67-fa0d-4f82-9ca7-219626ee5fdd/manager/0.log" Mar 08 02:06:52 crc kubenswrapper[4762]: I0308 02:06:52.302961 4762 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-f7fcc58b9-dtdxk_bdf1d0cf-9a6b-44b9-8b97-102d037a0f0b/manager/0.log" Mar 08 02:06:52 crc kubenswrapper[4762]: I0308 02:06:52.489150 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-pf8l2_5edc85d7-4f23-4c94-a998-17f8402c37d3/manager/1.log" Mar 08 02:06:52 crc kubenswrapper[4762]: I0308 02:06:52.620146 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-pf8l2_5edc85d7-4f23-4c94-a998-17f8402c37d3/manager/0.log" Mar 08 02:06:52 crc kubenswrapper[4762]: I0308 02:06:52.880361 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7c789f89c6-hwdww_ead6b665-cd0f-475a-a71b-33fd36246484/manager/0.log" Mar 08 02:06:52 crc kubenswrapper[4762]: I0308 02:06:52.969717 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-m26xv_7f6a4543-a300-4393-93e0-fcfeae3ccd61/manager/0.log" Mar 08 02:06:53 crc kubenswrapper[4762]: I0308 02:06:53.120451 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b6bfb6475-jvlps_ac0364ec-ad05-431d-b2f4-c92353f15f4c/manager/0.log" Mar 08 02:06:53 crc kubenswrapper[4762]: I0308 02:06:53.285717 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-lk8mx_da66283d-dd88-4e6a-a4ad-496064bc8a78/manager/0.log" Mar 08 02:06:53 crc kubenswrapper[4762]: I0308 02:06:53.529360 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-xmkb6_05d1f89d-b2b2-48ff-8555-e9f68ac3300a/manager/0.log" Mar 08 02:06:53 crc kubenswrapper[4762]: I0308 02:06:53.552091 4762 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-847b8_6b30a18d-93d3-48de-9b32-7c2326e04220/manager/1.log" Mar 08 02:06:53 crc kubenswrapper[4762]: I0308 02:06:53.765431 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-847b8_6b30a18d-93d3-48de-9b32-7c2326e04220/manager/0.log" Mar 08 02:06:53 crc kubenswrapper[4762]: I0308 02:06:53.811316 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2_8fc55d76-cb72-4ac9-b132-24b997e298a3/manager/1.log" Mar 08 02:06:53 crc kubenswrapper[4762]: I0308 02:06:53.981815 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9c8zvf2_8fc55d76-cb72-4ac9-b132-24b997e298a3/manager/0.log" Mar 08 02:06:54 crc kubenswrapper[4762]: I0308 02:06:54.043727 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5b4fc57fb8-bgr67_2032bfa9-398b-4802-84bc-272c70f31afb/operator/1.log" Mar 08 02:06:54 crc kubenswrapper[4762]: I0308 02:06:54.263918 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:06:54 crc kubenswrapper[4762]: E0308 02:06:54.264297 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:06:54 crc kubenswrapper[4762]: I0308 02:06:54.499971 4762 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5b4fc57fb8-bgr67_2032bfa9-398b-4802-84bc-272c70f31afb/operator/0.log" Mar 08 02:06:54 crc kubenswrapper[4762]: I0308 02:06:54.745522 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xtl98_0707d234-c53e-4212-b289-65a10c0b1502/registry-server/0.log" Mar 08 02:06:54 crc kubenswrapper[4762]: I0308 02:06:54.754202 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xtl98_0707d234-c53e-4212-b289-65a10c0b1502/registry-server/1.log" Mar 08 02:06:55 crc kubenswrapper[4762]: I0308 02:06:55.131565 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-dh78h_1906010e-f253-4d33-8e97-96d8860c3ff6/manager/0.log" Mar 08 02:06:55 crc kubenswrapper[4762]: I0308 02:06:55.395462 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-648564c9fc-ptbxt_d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3/manager/1.log" Mar 08 02:06:55 crc kubenswrapper[4762]: I0308 02:06:55.664075 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-648564c9fc-ptbxt_d999e2f1-c8f0-4e5f-b4ad-502b3e5413e3/manager/0.log" Mar 08 02:06:55 crc kubenswrapper[4762]: I0308 02:06:55.850589 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5546f_c872048a-5196-4f23-97e2-ce9e611c9ea0/operator/1.log" Mar 08 02:06:55 crc kubenswrapper[4762]: I0308 02:06:55.892130 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-5546f_c872048a-5196-4f23-97e2-ce9e611c9ea0/operator/0.log" Mar 08 02:06:56 crc kubenswrapper[4762]: I0308 02:06:56.101736 4762 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9b9ff9f4d-k88bh_3216ee69-307e-4151-889b-6e71f6e8c47a/manager/1.log" Mar 08 02:06:56 crc kubenswrapper[4762]: I0308 02:06:56.260703 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9b9ff9f4d-k88bh_3216ee69-307e-4151-889b-6e71f6e8c47a/manager/0.log" Mar 08 02:06:56 crc kubenswrapper[4762]: I0308 02:06:56.681195 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-55b5ff4dbb-zfk9l_1bc55675-0793-4489-b05d-03581df96527/manager/1.log" Mar 08 02:06:56 crc kubenswrapper[4762]: I0308 02:06:56.733210 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-2nfcz_20b130fa-d7f7-441a-bd96-0d5858f1ece1/manager/1.log" Mar 08 02:06:56 crc kubenswrapper[4762]: I0308 02:06:56.860145 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-55b5ff4dbb-zfk9l_1bc55675-0793-4489-b05d-03581df96527/manager/0.log" Mar 08 02:06:56 crc kubenswrapper[4762]: I0308 02:06:56.965286 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6dff66bc49-x8f92_f82c21a8-e080-4d70-b898-8c15a7b71989/manager/0.log" Mar 08 02:06:57 crc kubenswrapper[4762]: I0308 02:06:57.057700 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-5r7nk_8e8be3de-e055-441d-bfff-7b966b35dc15/manager/1.log" Mar 08 02:06:57 crc kubenswrapper[4762]: I0308 02:06:57.148423 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-5r7nk_8e8be3de-e055-441d-bfff-7b966b35dc15/manager/0.log" Mar 08 02:06:57 crc kubenswrapper[4762]: I0308 02:06:57.320499 4762 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7585f757fc-xgd5r_4d895a55-fc09-4986-ae61-19b0c5425d15/manager/0.log" Mar 08 02:07:02 crc kubenswrapper[4762]: I0308 02:07:02.900043 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-2nfcz_20b130fa-d7f7-441a-bd96-0d5858f1ece1/manager/0.log" Mar 08 02:07:05 crc kubenswrapper[4762]: I0308 02:07:05.263321 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:07:05 crc kubenswrapper[4762]: E0308 02:07:05.264066 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:07:20 crc kubenswrapper[4762]: I0308 02:07:20.060312 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vplbt_68ab86d6-f824-445d-b441-b7cbba73630b/control-plane-machine-set-operator/0.log" Mar 08 02:07:20 crc kubenswrapper[4762]: I0308 02:07:20.184609 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-frbvk_f42edfa8-610d-4cdf-a0db-63d3ccad4615/kube-rbac-proxy/0.log" Mar 08 02:07:20 crc kubenswrapper[4762]: I0308 02:07:20.264051 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:07:20 crc kubenswrapper[4762]: E0308 02:07:20.264366 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:07:20 crc kubenswrapper[4762]: I0308 02:07:20.268978 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-frbvk_f42edfa8-610d-4cdf-a0db-63d3ccad4615/machine-api-operator/0.log" Mar 08 02:07:31 crc kubenswrapper[4762]: I0308 02:07:31.264079 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:07:31 crc kubenswrapper[4762]: E0308 02:07:31.265126 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:07:35 crc kubenswrapper[4762]: I0308 02:07:35.932518 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-zwr24_f5da9b45-f4fb-4271-b27c-0d3e6251513c/cert-manager-controller/0.log" Mar 08 02:07:36 crc kubenswrapper[4762]: I0308 02:07:36.099180 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-fbthd_604f6908-a2e3-47b3-82af-b2dd6dc5dde2/cert-manager-cainjector/0.log" Mar 08 02:07:36 crc kubenswrapper[4762]: I0308 02:07:36.198054 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-8pp92_9056b43f-9cc2-446b-a516-04ba97bf2fd0/cert-manager-webhook/0.log" Mar 08 02:07:42 crc kubenswrapper[4762]: I0308 
02:07:42.263621 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:07:42 crc kubenswrapper[4762]: E0308 02:07:42.264965 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:07:51 crc kubenswrapper[4762]: I0308 02:07:51.807956 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-xkpln_d4b2992c-176e-427f-8126-78b2f3992745/nmstate-console-plugin/0.log" Mar 08 02:07:51 crc kubenswrapper[4762]: I0308 02:07:51.992108 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-xtp5w_ebd76fbf-3a5c-409a-9c6c-5052042a769c/nmstate-handler/0.log" Mar 08 02:07:52 crc kubenswrapper[4762]: I0308 02:07:52.047116 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-2fg7x_5f120575-fb49-4228-a522-8d5182663b94/nmstate-metrics/0.log" Mar 08 02:07:52 crc kubenswrapper[4762]: I0308 02:07:52.048822 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-2fg7x_5f120575-fb49-4228-a522-8d5182663b94/kube-rbac-proxy/0.log" Mar 08 02:07:52 crc kubenswrapper[4762]: I0308 02:07:52.213517 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-2fjlb_274d72c4-da34-4213-9aa4-daa52cf6668f/nmstate-webhook/0.log" Mar 08 02:07:52 crc kubenswrapper[4762]: I0308 02:07:52.269274 4762 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-zdlpc_5d4fac50-a9cd-48c4-897d-03de9b1454be/nmstate-operator/0.log" Mar 08 02:07:53 crc kubenswrapper[4762]: I0308 02:07:53.264251 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:07:53 crc kubenswrapper[4762]: E0308 02:07:53.264946 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:08:00 crc kubenswrapper[4762]: I0308 02:08:00.166581 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548928-z5kfc"] Mar 08 02:08:00 crc kubenswrapper[4762]: E0308 02:08:00.167976 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f" containerName="oc" Mar 08 02:08:00 crc kubenswrapper[4762]: I0308 02:08:00.167999 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f" containerName="oc" Mar 08 02:08:00 crc kubenswrapper[4762]: I0308 02:08:00.168644 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f" containerName="oc" Mar 08 02:08:00 crc kubenswrapper[4762]: I0308 02:08:00.169942 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548928-z5kfc"
Mar 08 02:08:00 crc kubenswrapper[4762]: I0308 02:08:00.172324 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk"
Mar 08 02:08:00 crc kubenswrapper[4762]: I0308 02:08:00.172573 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 08 02:08:00 crc kubenswrapper[4762]: I0308 02:08:00.172904 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 08 02:08:00 crc kubenswrapper[4762]: I0308 02:08:00.198927 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548928-z5kfc"]
Mar 08 02:08:00 crc kubenswrapper[4762]: I0308 02:08:00.206631 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngpm2\" (UniqueName: \"kubernetes.io/projected/97613940-eb08-4bcb-9e99-08dd386e6843-kube-api-access-ngpm2\") pod \"auto-csr-approver-29548928-z5kfc\" (UID: \"97613940-eb08-4bcb-9e99-08dd386e6843\") " pod="openshift-infra/auto-csr-approver-29548928-z5kfc"
Mar 08 02:08:00 crc kubenswrapper[4762]: I0308 02:08:00.308668 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngpm2\" (UniqueName: \"kubernetes.io/projected/97613940-eb08-4bcb-9e99-08dd386e6843-kube-api-access-ngpm2\") pod \"auto-csr-approver-29548928-z5kfc\" (UID: \"97613940-eb08-4bcb-9e99-08dd386e6843\") " pod="openshift-infra/auto-csr-approver-29548928-z5kfc"
Mar 08 02:08:00 crc kubenswrapper[4762]: I0308 02:08:00.332508 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngpm2\" (UniqueName: \"kubernetes.io/projected/97613940-eb08-4bcb-9e99-08dd386e6843-kube-api-access-ngpm2\") pod \"auto-csr-approver-29548928-z5kfc\" (UID: \"97613940-eb08-4bcb-9e99-08dd386e6843\") " pod="openshift-infra/auto-csr-approver-29548928-z5kfc"
Mar 08 02:08:00 crc kubenswrapper[4762]: I0308 02:08:00.490472 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548928-z5kfc"
Mar 08 02:08:00 crc kubenswrapper[4762]: I0308 02:08:00.973916 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548928-z5kfc"]
Mar 08 02:08:01 crc kubenswrapper[4762]: I0308 02:08:01.176621 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548928-z5kfc" event={"ID":"97613940-eb08-4bcb-9e99-08dd386e6843","Type":"ContainerStarted","Data":"61ca17294ea4891f6c26070ce5fd79b235c93de98cdcc09d0687f4cb9dde77f6"}
Mar 08 02:08:03 crc kubenswrapper[4762]: I0308 02:08:03.202953 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548928-z5kfc" event={"ID":"97613940-eb08-4bcb-9e99-08dd386e6843","Type":"ContainerStarted","Data":"06e428f27203e4309da9073bf6df2765a76d74757a4c5c7133f7d560919785c4"}
Mar 08 02:08:04 crc kubenswrapper[4762]: I0308 02:08:04.218956 4762 generic.go:334] "Generic (PLEG): container finished" podID="97613940-eb08-4bcb-9e99-08dd386e6843" containerID="06e428f27203e4309da9073bf6df2765a76d74757a4c5c7133f7d560919785c4" exitCode=0
Mar 08 02:08:04 crc kubenswrapper[4762]: I0308 02:08:04.219043 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548928-z5kfc" event={"ID":"97613940-eb08-4bcb-9e99-08dd386e6843","Type":"ContainerDied","Data":"06e428f27203e4309da9073bf6df2765a76d74757a4c5c7133f7d560919785c4"}
Mar 08 02:08:04 crc kubenswrapper[4762]: I0308 02:08:04.648608 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548928-z5kfc"
Mar 08 02:08:04 crc kubenswrapper[4762]: I0308 02:08:04.724664 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngpm2\" (UniqueName: \"kubernetes.io/projected/97613940-eb08-4bcb-9e99-08dd386e6843-kube-api-access-ngpm2\") pod \"97613940-eb08-4bcb-9e99-08dd386e6843\" (UID: \"97613940-eb08-4bcb-9e99-08dd386e6843\") "
Mar 08 02:08:04 crc kubenswrapper[4762]: I0308 02:08:04.732190 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97613940-eb08-4bcb-9e99-08dd386e6843-kube-api-access-ngpm2" (OuterVolumeSpecName: "kube-api-access-ngpm2") pod "97613940-eb08-4bcb-9e99-08dd386e6843" (UID: "97613940-eb08-4bcb-9e99-08dd386e6843"). InnerVolumeSpecName "kube-api-access-ngpm2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 08 02:08:04 crc kubenswrapper[4762]: I0308 02:08:04.826870 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngpm2\" (UniqueName: \"kubernetes.io/projected/97613940-eb08-4bcb-9e99-08dd386e6843-kube-api-access-ngpm2\") on node \"crc\" DevicePath \"\""
Mar 08 02:08:05 crc kubenswrapper[4762]: I0308 02:08:05.232428 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548928-z5kfc" event={"ID":"97613940-eb08-4bcb-9e99-08dd386e6843","Type":"ContainerDied","Data":"61ca17294ea4891f6c26070ce5fd79b235c93de98cdcc09d0687f4cb9dde77f6"}
Mar 08 02:08:05 crc kubenswrapper[4762]: I0308 02:08:05.232716 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61ca17294ea4891f6c26070ce5fd79b235c93de98cdcc09d0687f4cb9dde77f6"
Mar 08 02:08:05 crc kubenswrapper[4762]: I0308 02:08:05.232475 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548928-z5kfc"
Mar 08 02:08:05 crc kubenswrapper[4762]: I0308 02:08:05.735730 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548922-b9s6j"]
Mar 08 02:08:05 crc kubenswrapper[4762]: I0308 02:08:05.745702 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548922-b9s6j"]
Mar 08 02:08:06 crc kubenswrapper[4762]: I0308 02:08:06.263951 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad"
Mar 08 02:08:06 crc kubenswrapper[4762]: E0308 02:08:06.264537 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c"
Mar 08 02:08:07 crc kubenswrapper[4762]: I0308 02:08:07.276223 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e42fa3a8-143b-4850-89ce-f63ef728708a" path="/var/lib/kubelet/pods/e42fa3a8-143b-4850-89ce-f63ef728708a/volumes"
Mar 08 02:08:07 crc kubenswrapper[4762]: I0308 02:08:07.751630 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-55b56f86c9-fm7md_b242b134-d2b7-4e03-a6c1-cd046de89c3d/manager/1.log"
Mar 08 02:08:07 crc kubenswrapper[4762]: I0308 02:08:07.761670 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-55b56f86c9-fm7md_b242b134-d2b7-4e03-a6c1-cd046de89c3d/kube-rbac-proxy/0.log"
Mar 08 02:08:07 crc kubenswrapper[4762]: I0308 02:08:07.974817 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-55b56f86c9-fm7md_b242b134-d2b7-4e03-a6c1-cd046de89c3d/manager/0.log"
Mar 08 02:08:17 crc kubenswrapper[4762]: I0308 02:08:17.263682 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad"
Mar 08 02:08:17 crc kubenswrapper[4762]: E0308 02:08:17.267197 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c"
Mar 08 02:08:23 crc kubenswrapper[4762]: I0308 02:08:23.352657 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-5cnvc_9f4ae992-28ff-440b-885f-2b01a62887d1/prometheus-operator/0.log"
Mar 08 02:08:23 crc kubenswrapper[4762]: I0308 02:08:23.503203 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75b9887994-8vm8n_f0e56a85-8dc3-4b03-9dc5-c9cce7682162/prometheus-operator-admission-webhook/0.log"
Mar 08 02:08:23 crc kubenswrapper[4762]: I0308 02:08:23.586046 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75b9887994-rp4fg_3d6adc3f-581b-489b-9bbd-dbc4e93c54f1/prometheus-operator-admission-webhook/0.log"
Mar 08 02:08:23 crc kubenswrapper[4762]: I0308 02:08:23.667524 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-jr6wh_977085a1-8184-4c52-8e8d-6cb64635e335/operator/1.log"
Mar 08 02:08:23 crc kubenswrapper[4762]: I0308 02:08:23.731327 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-jr6wh_977085a1-8184-4c52-8e8d-6cb64635e335/operator/0.log"
Mar 08 02:08:23 crc kubenswrapper[4762]: I0308 02:08:23.899481 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-rjj98_5e5e70a6-f33a-4930-9699-83dfa11cf98d/observability-ui-dashboards/0.log"
Mar 08 02:08:24 crc kubenswrapper[4762]: I0308 02:08:24.001114 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-9ntmw_3082ab77-d932-4350-915b-43172392ba8e/perses-operator/0.log"
Mar 08 02:08:30 crc kubenswrapper[4762]: I0308 02:08:30.264149 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad"
Mar 08 02:08:30 crc kubenswrapper[4762]: E0308 02:08:30.265201 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c"
Mar 08 02:08:41 crc kubenswrapper[4762]: I0308 02:08:41.946462 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-c769fd969-k82gj_c0dea8af-c19d-492c-a9ab-e271b9edad28/cluster-logging-operator/0.log"
Mar 08 02:08:42 crc kubenswrapper[4762]: I0308 02:08:42.141175 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-x5zk2_8dd9e504-c718-4778-972a-da408fd6c2fe/collector/0.log"
Mar 08 02:08:42 crc kubenswrapper[4762]: I0308 02:08:42.220869 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_4c30f467-b939-4c68-91f0-707c6893e6ff/loki-compactor/0.log"
Mar 08 02:08:42 crc kubenswrapper[4762]: I0308 02:08:42.263963 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad"
Mar 08 02:08:42 crc kubenswrapper[4762]: E0308 02:08:42.264525 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c"
Mar 08 02:08:42 crc kubenswrapper[4762]: I0308 02:08:42.373163 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5d5548c9f5-8fxrr_7d1d5c16-4b49-4abf-8b13-0df0fda43b6a/loki-distributor/0.log"
Mar 08 02:08:42 crc kubenswrapper[4762]: I0308 02:08:42.469602 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-58595d78f8-lmbn4_3be01762-1f06-4534-8426-ab3b41e8e8d8/gateway/0.log"
Mar 08 02:08:42 crc kubenswrapper[4762]: I0308 02:08:42.480949 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-58595d78f8-lmbn4_3be01762-1f06-4534-8426-ab3b41e8e8d8/opa/0.log"
Mar 08 02:08:42 crc kubenswrapper[4762]: I0308 02:08:42.596906 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-58595d78f8-vq8xm_1efe4203-538b-41b7-9e52-832aeceaac3b/gateway/0.log"
Mar 08 02:08:42 crc kubenswrapper[4762]: I0308 02:08:42.698144 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-58595d78f8-vq8xm_1efe4203-538b-41b7-9e52-832aeceaac3b/opa/0.log"
Mar 08 02:08:42 crc kubenswrapper[4762]: I0308 02:08:42.781175 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_748fb55a-dbe2-4b8b-9e08-577495a258a4/loki-index-gateway/0.log"
Mar 08 02:08:42 crc kubenswrapper[4762]: I0308 02:08:42.975947 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_306f3a2d-d090-4aad-b84c-05078f5f8be5/loki-ingester/0.log"
Mar 08 02:08:43 crc kubenswrapper[4762]: I0308 02:08:43.004098 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76bf7b6d45-nsgkb_6fd90908-2008-4941-ba65-62557823e8a0/loki-querier/0.log"
Mar 08 02:08:43 crc kubenswrapper[4762]: I0308 02:08:43.370464 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-6d6859c548-phxp4_6e603ecb-b9b1-4fba-af81-9da07c682395/loki-query-frontend/0.log"
Mar 08 02:08:48 crc kubenswrapper[4762]: I0308 02:08:48.464159 4762 scope.go:117] "RemoveContainer" containerID="0799d8760bdb5198f2dd96a304f593e68ab8d2845633b6946d58e3f0cadc802a"
Mar 08 02:08:54 crc kubenswrapper[4762]: I0308 02:08:54.264490 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad"
Mar 08 02:08:54 crc kubenswrapper[4762]: E0308 02:08:54.265688 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c"
Mar 08 02:08:59 crc kubenswrapper[4762]: I0308 02:08:59.303921 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-qgc88_0b0d938e-fbb6-4ed9-8822-c87f8ce564e3/kube-rbac-proxy/0.log"
Mar 08 02:08:59 crc kubenswrapper[4762]: I0308 02:08:59.466477 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-qgc88_0b0d938e-fbb6-4ed9-8822-c87f8ce564e3/controller/0.log"
Mar 08 02:08:59 crc kubenswrapper[4762]: I0308 02:08:59.603156 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/cp-frr-files/0.log"
Mar 08 02:08:59 crc kubenswrapper[4762]: I0308 02:08:59.776275 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/cp-frr-files/0.log"
Mar 08 02:08:59 crc kubenswrapper[4762]: I0308 02:08:59.787851 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/cp-metrics/0.log"
Mar 08 02:08:59 crc kubenswrapper[4762]: I0308 02:08:59.792578 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/cp-reloader/0.log"
Mar 08 02:08:59 crc kubenswrapper[4762]: I0308 02:08:59.848826 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/cp-reloader/0.log"
Mar 08 02:08:59 crc kubenswrapper[4762]: I0308 02:08:59.989472 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/cp-frr-files/0.log"
Mar 08 02:09:00 crc kubenswrapper[4762]: I0308 02:09:00.017576 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/cp-reloader/0.log"
Mar 08 02:09:00 crc kubenswrapper[4762]: I0308 02:09:00.023099 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/cp-metrics/0.log"
Mar 08 02:09:00 crc kubenswrapper[4762]: I0308 02:09:00.047862 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/cp-metrics/0.log"
Mar 08 02:09:00 crc kubenswrapper[4762]: I0308 02:09:00.215040 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/cp-reloader/0.log"
Mar 08 02:09:00 crc kubenswrapper[4762]: I0308 02:09:00.247818 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/cp-frr-files/0.log"
Mar 08 02:09:00 crc kubenswrapper[4762]: I0308 02:09:00.252181 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/controller/1.log"
Mar 08 02:09:00 crc kubenswrapper[4762]: I0308 02:09:00.287931 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/cp-metrics/0.log"
Mar 08 02:09:00 crc kubenswrapper[4762]: I0308 02:09:00.424597 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/controller/0.log"
Mar 08 02:09:00 crc kubenswrapper[4762]: I0308 02:09:00.484926 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/frr/1.log"
Mar 08 02:09:00 crc kubenswrapper[4762]: I0308 02:09:00.505708 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/frr-metrics/0.log"
Mar 08 02:09:00 crc kubenswrapper[4762]: I0308 02:09:00.684729 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/kube-rbac-proxy/0.log"
Mar 08 02:09:00 crc kubenswrapper[4762]: I0308 02:09:00.740140 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/reloader/0.log"
Mar 08 02:09:00 crc kubenswrapper[4762]: I0308 02:09:00.748206 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/kube-rbac-proxy-frr/0.log"
Mar 08 02:09:00 crc kubenswrapper[4762]: I0308 02:09:00.976914 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-xrnnz_7a1f5442-2f22-4dff-b59a-0a8233a83b41/frr-k8s-webhook-server/0.log"
Mar 08 02:09:00 crc kubenswrapper[4762]: I0308 02:09:00.997342 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-xrnnz_7a1f5442-2f22-4dff-b59a-0a8233a83b41/frr-k8s-webhook-server/1.log"
Mar 08 02:09:01 crc kubenswrapper[4762]: I0308 02:09:01.173919 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-58b8966548-4d5g2_97490dfa-d4e5-4013-8a53-199f5872ea4c/manager/1.log"
Mar 08 02:09:01 crc kubenswrapper[4762]: I0308 02:09:01.227857 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-58b8966548-4d5g2_97490dfa-d4e5-4013-8a53-199f5872ea4c/manager/0.log"
Mar 08 02:09:01 crc kubenswrapper[4762]: I0308 02:09:01.394504 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6f659bb4d7-nxfzd_cbdc8d75-414a-451a-b594-dc430abfcc09/webhook-server/1.log"
Mar 08 02:09:01 crc kubenswrapper[4762]: I0308 02:09:01.456327 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6f659bb4d7-nxfzd_cbdc8d75-414a-451a-b594-dc430abfcc09/webhook-server/0.log"
Mar 08 02:09:01 crc kubenswrapper[4762]: I0308 02:09:01.623610 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4j4bt_3cafb56e-d1ea-48b5-9b1c-691e86cba0d9/kube-rbac-proxy/0.log"
Mar 08 02:09:01 crc kubenswrapper[4762]: I0308 02:09:01.855638 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4j4bt_3cafb56e-d1ea-48b5-9b1c-691e86cba0d9/speaker/1.log"
Mar 08 02:09:02 crc kubenswrapper[4762]: I0308 02:09:02.365802 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4j4bt_3cafb56e-d1ea-48b5-9b1c-691e86cba0d9/speaker/0.log"
Mar 08 02:09:02 crc kubenswrapper[4762]: I0308 02:09:02.742840 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-4qgst_35f236f0-d58d-4bb2-a6cd-689097c3fbf4/frr/0.log"
Mar 08 02:09:05 crc kubenswrapper[4762]: I0308 02:09:05.264365 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad"
Mar 08 02:09:05 crc kubenswrapper[4762]: E0308 02:09:05.265342 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c"
Mar 08 02:09:16 crc kubenswrapper[4762]: I0308 02:09:16.786701 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9_86ebd292-c9a1-4ae0-ab20-192155a862d6/util/0.log"
Mar 08 02:09:16 crc kubenswrapper[4762]: I0308 02:09:16.987867 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9_86ebd292-c9a1-4ae0-ab20-192155a862d6/util/0.log"
Mar 08 02:09:17 crc kubenswrapper[4762]: I0308 02:09:17.022411 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9_86ebd292-c9a1-4ae0-ab20-192155a862d6/pull/0.log"
Mar 08 02:09:17 crc kubenswrapper[4762]: I0308 02:09:17.026505 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9_86ebd292-c9a1-4ae0-ab20-192155a862d6/pull/0.log"
Mar 08 02:09:17 crc kubenswrapper[4762]: I0308 02:09:17.184320 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9_86ebd292-c9a1-4ae0-ab20-192155a862d6/util/0.log"
Mar 08 02:09:17 crc kubenswrapper[4762]: I0308 02:09:17.184886 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9_86ebd292-c9a1-4ae0-ab20-192155a862d6/pull/0.log"
Mar 08 02:09:17 crc kubenswrapper[4762]: I0308 02:09:17.220312 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a828fvc9_86ebd292-c9a1-4ae0-ab20-192155a862d6/extract/0.log"
Mar 08 02:09:17 crc kubenswrapper[4762]: I0308 02:09:17.377204 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls_c97c6c2e-29ab-4045-912c-289db81216bd/util/0.log"
Mar 08 02:09:17 crc kubenswrapper[4762]: I0308 02:09:17.505810 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls_c97c6c2e-29ab-4045-912c-289db81216bd/pull/0.log"
Mar 08 02:09:17 crc kubenswrapper[4762]: I0308 02:09:17.540273 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls_c97c6c2e-29ab-4045-912c-289db81216bd/util/0.log"
Mar 08 02:09:17 crc kubenswrapper[4762]: I0308 02:09:17.546212 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls_c97c6c2e-29ab-4045-912c-289db81216bd/pull/0.log"
Mar 08 02:09:17 crc kubenswrapper[4762]: I0308 02:09:17.727990 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls_c97c6c2e-29ab-4045-912c-289db81216bd/util/0.log"
Mar 08 02:09:17 crc kubenswrapper[4762]: I0308 02:09:17.747795 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls_c97c6c2e-29ab-4045-912c-289db81216bd/pull/0.log"
Mar 08 02:09:17 crc kubenswrapper[4762]: I0308 02:09:17.790036 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_371ee4810f5f68c5176d7257cefd8758df33c232524c25acbf90f69e19wlgls_c97c6c2e-29ab-4045-912c-289db81216bd/extract/0.log"
Mar 08 02:09:17 crc kubenswrapper[4762]: I0308 02:09:17.885576 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt_6cb1cb3e-e36d-4101-ad14-2f03a84bfe95/util/0.log"
Mar 08 02:09:18 crc kubenswrapper[4762]: I0308 02:09:18.062573 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt_6cb1cb3e-e36d-4101-ad14-2f03a84bfe95/util/0.log"
Mar 08 02:09:18 crc kubenswrapper[4762]: I0308 02:09:18.066333 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt_6cb1cb3e-e36d-4101-ad14-2f03a84bfe95/pull/0.log"
Mar 08 02:09:18 crc kubenswrapper[4762]: I0308 02:09:18.073798 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt_6cb1cb3e-e36d-4101-ad14-2f03a84bfe95/pull/0.log"
Mar 08 02:09:18 crc kubenswrapper[4762]: I0308 02:09:18.256372 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt_6cb1cb3e-e36d-4101-ad14-2f03a84bfe95/util/0.log"
Mar 08 02:09:18 crc kubenswrapper[4762]: I0308 02:09:18.265708 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad"
Mar 08 02:09:18 crc kubenswrapper[4762]: E0308 02:09:18.266159 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c"
Mar 08 02:09:18 crc kubenswrapper[4762]: I0308 02:09:18.278233 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt_6cb1cb3e-e36d-4101-ad14-2f03a84bfe95/pull/0.log"
Mar 08 02:09:18 crc kubenswrapper[4762]: I0308 02:09:18.340783 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0848fmt_6cb1cb3e-e36d-4101-ad14-2f03a84bfe95/extract/0.log"
Mar 08 02:09:18 crc kubenswrapper[4762]: I0308 02:09:18.481461 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-78hq2_4f3c2509-9848-4e76-96ae-8f815f66d6d7/extract-utilities/0.log"
Mar 08 02:09:18 crc kubenswrapper[4762]: I0308 02:09:18.649089 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-78hq2_4f3c2509-9848-4e76-96ae-8f815f66d6d7/extract-content/0.log"
Mar 08 02:09:18 crc kubenswrapper[4762]: I0308 02:09:18.650986 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-78hq2_4f3c2509-9848-4e76-96ae-8f815f66d6d7/extract-utilities/0.log"
Mar 08 02:09:18 crc kubenswrapper[4762]: I0308 02:09:18.692833 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-78hq2_4f3c2509-9848-4e76-96ae-8f815f66d6d7/extract-content/0.log"
Mar 08 02:09:18 crc kubenswrapper[4762]: I0308 02:09:18.840479 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-78hq2_4f3c2509-9848-4e76-96ae-8f815f66d6d7/extract-utilities/0.log"
Mar 08 02:09:18 crc kubenswrapper[4762]: I0308 02:09:18.909556 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-78hq2_4f3c2509-9848-4e76-96ae-8f815f66d6d7/extract-content/0.log"
Mar 08 02:09:19 crc kubenswrapper[4762]: I0308 02:09:19.052845 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rfbxq_0870b34f-2648-451a-a34e-8555e4e4982a/extract-utilities/0.log"
Mar 08 02:09:19 crc kubenswrapper[4762]: I0308 02:09:19.327819 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rfbxq_0870b34f-2648-451a-a34e-8555e4e4982a/extract-utilities/0.log"
Mar 08 02:09:19 crc kubenswrapper[4762]: I0308 02:09:19.348921 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rfbxq_0870b34f-2648-451a-a34e-8555e4e4982a/extract-content/0.log"
Mar 08 02:09:19 crc kubenswrapper[4762]: I0308 02:09:19.395302 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rfbxq_0870b34f-2648-451a-a34e-8555e4e4982a/extract-content/0.log"
Mar 08 02:09:19 crc kubenswrapper[4762]: I0308 02:09:19.556223 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rfbxq_0870b34f-2648-451a-a34e-8555e4e4982a/extract-utilities/0.log"
Mar 08 02:09:19 crc kubenswrapper[4762]: I0308 02:09:19.605367 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rfbxq_0870b34f-2648-451a-a34e-8555e4e4982a/extract-content/0.log"
Mar 08 02:09:19 crc kubenswrapper[4762]: I0308 02:09:19.841215 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv_7ef5c038-aa5e-4e6f-ac49-1fae6408849b/util/0.log"
Mar 08 02:09:20 crc kubenswrapper[4762]: I0308 02:09:20.099936 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv_7ef5c038-aa5e-4e6f-ac49-1fae6408849b/util/0.log"
Mar 08 02:09:20 crc kubenswrapper[4762]: I0308 02:09:20.148779 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv_7ef5c038-aa5e-4e6f-ac49-1fae6408849b/pull/0.log"
Mar 08 02:09:20 crc kubenswrapper[4762]: I0308 02:09:20.281115 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-78hq2_4f3c2509-9848-4e76-96ae-8f815f66d6d7/registry-server/0.log"
Mar 08 02:09:20 crc kubenswrapper[4762]: I0308 02:09:20.431684 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv_7ef5c038-aa5e-4e6f-ac49-1fae6408849b/pull/0.log"
Mar 08 02:09:20 crc kubenswrapper[4762]: I0308 02:09:20.591946 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv_7ef5c038-aa5e-4e6f-ac49-1fae6408849b/util/0.log"
Mar 08 02:09:20 crc kubenswrapper[4762]: I0308 02:09:20.647328 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv_7ef5c038-aa5e-4e6f-ac49-1fae6408849b/pull/0.log"
Mar 08 02:09:20 crc kubenswrapper[4762]: I0308 02:09:20.686779 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4llndv_7ef5c038-aa5e-4e6f-ac49-1fae6408849b/extract/0.log"
Mar 08 02:09:20 crc kubenswrapper[4762]: I0308 02:09:20.858682 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rfbxq_0870b34f-2648-451a-a34e-8555e4e4982a/registry-server/0.log"
Mar 08 02:09:20 crc kubenswrapper[4762]: I0308 02:09:20.873107 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz_4d493483-eff5-4dc1-881c-6dbac66ecffe/util/0.log"
Mar 08 02:09:21 crc kubenswrapper[4762]: I0308 02:09:21.052537 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz_4d493483-eff5-4dc1-881c-6dbac66ecffe/util/0.log"
Mar 08 02:09:21 crc kubenswrapper[4762]: I0308 02:09:21.080011 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz_4d493483-eff5-4dc1-881c-6dbac66ecffe/pull/0.log"
Mar 08 02:09:21 crc kubenswrapper[4762]: I0308 02:09:21.083606 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz_4d493483-eff5-4dc1-881c-6dbac66ecffe/pull/0.log"
Mar 08 02:09:21 crc kubenswrapper[4762]: I0308 02:09:21.285830 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz_4d493483-eff5-4dc1-881c-6dbac66ecffe/extract/0.log"
Mar 08 02:09:21 crc kubenswrapper[4762]: I0308 02:09:21.286133 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz_4d493483-eff5-4dc1-881c-6dbac66ecffe/util/0.log"
Mar 08 02:09:21 crc kubenswrapper[4762]: I0308 02:09:21.304094 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e2b87168fae98cca1c2d05d26ceb83b1b30b4b54c6968a79bb91e08989x9vfz_4d493483-eff5-4dc1-881c-6dbac66ecffe/pull/0.log"
Mar 08 02:09:21 crc kubenswrapper[4762]: I0308 02:09:21.341390 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sxtbp_45e73cf0-17af-446f-8a92-5c45dee4ee00/marketplace-operator/0.log"
Mar 08 02:09:21 crc kubenswrapper[4762]: I0308 02:09:21.476560 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7xslc_b83aab9a-f794-43d3-af07-0a00dac138da/extract-utilities/0.log"
Mar 08 02:09:21 crc kubenswrapper[4762]: I0308 02:09:21.614965 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7xslc_b83aab9a-f794-43d3-af07-0a00dac138da/extract-utilities/0.log"
Mar 08 02:09:21 crc kubenswrapper[4762]: I0308 02:09:21.615360 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7xslc_b83aab9a-f794-43d3-af07-0a00dac138da/extract-content/0.log"
Mar 08 02:09:21 crc kubenswrapper[4762]: I0308 02:09:21.615491 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7xslc_b83aab9a-f794-43d3-af07-0a00dac138da/extract-content/0.log"
Mar 08 02:09:21 crc kubenswrapper[4762]: I0308 02:09:21.816936 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7xslc_b83aab9a-f794-43d3-af07-0a00dac138da/extract-content/0.log"
Mar 08 02:09:21 crc kubenswrapper[4762]: I0308 02:09:21.821941 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7xslc_b83aab9a-f794-43d3-af07-0a00dac138da/extract-utilities/0.log"
Mar 08 02:09:21 crc kubenswrapper[4762]: I0308 02:09:21.875414 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8szql_c578d6b5-daa2-4fd3-88ee-29ab82caaa5a/extract-utilities/0.log"
Mar 08 02:09:22 crc kubenswrapper[4762]: I0308 02:09:22.036224 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-7xslc_b83aab9a-f794-43d3-af07-0a00dac138da/registry-server/0.log"
Mar 08 02:09:22 crc kubenswrapper[4762]: I0308 02:09:22.069959 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8szql_c578d6b5-daa2-4fd3-88ee-29ab82caaa5a/extract-utilities/0.log"
Mar 08 02:09:22 crc kubenswrapper[4762]: I0308 02:09:22.083354 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8szql_c578d6b5-daa2-4fd3-88ee-29ab82caaa5a/extract-content/0.log"
Mar 08 02:09:22 crc kubenswrapper[4762]: I0308 02:09:22.102680 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8szql_c578d6b5-daa2-4fd3-88ee-29ab82caaa5a/extract-content/0.log"
Mar 08 02:09:22 crc kubenswrapper[4762]: I0308 02:09:22.280121 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8szql_c578d6b5-daa2-4fd3-88ee-29ab82caaa5a/extract-content/0.log"
Mar 08 02:09:22 crc kubenswrapper[4762]: I0308 02:09:22.285033 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8szql_c578d6b5-daa2-4fd3-88ee-29ab82caaa5a/extract-utilities/0.log"
Mar 08 02:09:23 crc kubenswrapper[4762]: I0308 02:09:23.208786 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-8szql_c578d6b5-daa2-4fd3-88ee-29ab82caaa5a/registry-server/0.log"
Mar 08 02:09:32 crc kubenswrapper[4762]: I0308 02:09:32.263769 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad"
Mar 08 02:09:32 crc kubenswrapper[4762]: E0308 02:09:32.264746 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c"
Mar 08 02:09:36 crc kubenswrapper[4762]: I0308 02:09:36.340782 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75b9887994-8vm8n_f0e56a85-8dc3-4b03-9dc5-c9cce7682162/prometheus-operator-admission-webhook/0.log"
Mar 08 02:09:36 crc kubenswrapper[4762]: I0308 02:09:36.377395 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-5cnvc_9f4ae992-28ff-440b-885f-2b01a62887d1/prometheus-operator/0.log"
Mar 08 02:09:36 crc kubenswrapper[4762]: I0308 02:09:36.403923 4762 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-75b9887994-rp4fg_3d6adc3f-581b-489b-9bbd-dbc4e93c54f1/prometheus-operator-admission-webhook/0.log" Mar 08 02:09:36 crc kubenswrapper[4762]: I0308 02:09:36.542407 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-jr6wh_977085a1-8184-4c52-8e8d-6cb64635e335/operator/1.log" Mar 08 02:09:36 crc kubenswrapper[4762]: I0308 02:09:36.562830 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-rjj98_5e5e70a6-f33a-4930-9699-83dfa11cf98d/observability-ui-dashboards/0.log" Mar 08 02:09:36 crc kubenswrapper[4762]: I0308 02:09:36.613166 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-jr6wh_977085a1-8184-4c52-8e8d-6cb64635e335/operator/0.log" Mar 08 02:09:36 crc kubenswrapper[4762]: I0308 02:09:36.615450 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-9ntmw_3082ab77-d932-4350-915b-43172392ba8e/perses-operator/0.log" Mar 08 02:09:46 crc kubenswrapper[4762]: I0308 02:09:46.264086 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:09:47 crc kubenswrapper[4762]: I0308 02:09:47.454151 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"c27cb3ea76a81c975d39fac956f11a7f4bc22a19f950230dcd59d64f691cbdf2"} Mar 08 02:09:50 crc kubenswrapper[4762]: I0308 02:09:50.838191 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-55b56f86c9-fm7md_b242b134-d2b7-4e03-a6c1-cd046de89c3d/manager/0.log" Mar 08 02:09:50 crc kubenswrapper[4762]: I0308 02:09:50.845045 
4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-55b56f86c9-fm7md_b242b134-d2b7-4e03-a6c1-cd046de89c3d/manager/1.log" Mar 08 02:09:50 crc kubenswrapper[4762]: I0308 02:09:50.879425 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-55b56f86c9-fm7md_b242b134-d2b7-4e03-a6c1-cd046de89c3d/kube-rbac-proxy/0.log" Mar 08 02:10:00 crc kubenswrapper[4762]: I0308 02:10:00.156887 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548930-szg5g"] Mar 08 02:10:00 crc kubenswrapper[4762]: E0308 02:10:00.157996 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97613940-eb08-4bcb-9e99-08dd386e6843" containerName="oc" Mar 08 02:10:00 crc kubenswrapper[4762]: I0308 02:10:00.158014 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="97613940-eb08-4bcb-9e99-08dd386e6843" containerName="oc" Mar 08 02:10:00 crc kubenswrapper[4762]: I0308 02:10:00.158299 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="97613940-eb08-4bcb-9e99-08dd386e6843" containerName="oc" Mar 08 02:10:00 crc kubenswrapper[4762]: I0308 02:10:00.159367 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548930-szg5g" Mar 08 02:10:00 crc kubenswrapper[4762]: I0308 02:10:00.165448 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 02:10:00 crc kubenswrapper[4762]: I0308 02:10:00.165831 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 02:10:00 crc kubenswrapper[4762]: I0308 02:10:00.166778 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 02:10:00 crc kubenswrapper[4762]: I0308 02:10:00.168902 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548930-szg5g"] Mar 08 02:10:00 crc kubenswrapper[4762]: I0308 02:10:00.225629 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnp8l\" (UniqueName: \"kubernetes.io/projected/b788b3ef-6db5-4fa8-85bf-001b2a121c8e-kube-api-access-wnp8l\") pod \"auto-csr-approver-29548930-szg5g\" (UID: \"b788b3ef-6db5-4fa8-85bf-001b2a121c8e\") " pod="openshift-infra/auto-csr-approver-29548930-szg5g" Mar 08 02:10:00 crc kubenswrapper[4762]: I0308 02:10:00.327921 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnp8l\" (UniqueName: \"kubernetes.io/projected/b788b3ef-6db5-4fa8-85bf-001b2a121c8e-kube-api-access-wnp8l\") pod \"auto-csr-approver-29548930-szg5g\" (UID: \"b788b3ef-6db5-4fa8-85bf-001b2a121c8e\") " pod="openshift-infra/auto-csr-approver-29548930-szg5g" Mar 08 02:10:00 crc kubenswrapper[4762]: I0308 02:10:00.347433 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnp8l\" (UniqueName: \"kubernetes.io/projected/b788b3ef-6db5-4fa8-85bf-001b2a121c8e-kube-api-access-wnp8l\") pod \"auto-csr-approver-29548930-szg5g\" (UID: \"b788b3ef-6db5-4fa8-85bf-001b2a121c8e\") " 
pod="openshift-infra/auto-csr-approver-29548930-szg5g" Mar 08 02:10:00 crc kubenswrapper[4762]: I0308 02:10:00.490689 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548930-szg5g" Mar 08 02:10:01 crc kubenswrapper[4762]: I0308 02:10:01.662943 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548930-szg5g"] Mar 08 02:10:01 crc kubenswrapper[4762]: I0308 02:10:01.664492 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 02:10:02 crc kubenswrapper[4762]: I0308 02:10:02.051518 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548930-szg5g" event={"ID":"b788b3ef-6db5-4fa8-85bf-001b2a121c8e","Type":"ContainerStarted","Data":"977ad393caea5e24e2dfde8c345cfeac0f3e60ed6cfb9709f0268be7505a4ae8"} Mar 08 02:10:04 crc kubenswrapper[4762]: I0308 02:10:04.078196 4762 generic.go:334] "Generic (PLEG): container finished" podID="b788b3ef-6db5-4fa8-85bf-001b2a121c8e" containerID="c559c77a8bc7ff51c58ff10626ab91321c88fb5e3a9fe1a8cc57971a3435706a" exitCode=0 Mar 08 02:10:04 crc kubenswrapper[4762]: I0308 02:10:04.078418 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548930-szg5g" event={"ID":"b788b3ef-6db5-4fa8-85bf-001b2a121c8e","Type":"ContainerDied","Data":"c559c77a8bc7ff51c58ff10626ab91321c88fb5e3a9fe1a8cc57971a3435706a"} Mar 08 02:10:05 crc kubenswrapper[4762]: I0308 02:10:05.517474 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548930-szg5g" Mar 08 02:10:05 crc kubenswrapper[4762]: I0308 02:10:05.659122 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnp8l\" (UniqueName: \"kubernetes.io/projected/b788b3ef-6db5-4fa8-85bf-001b2a121c8e-kube-api-access-wnp8l\") pod \"b788b3ef-6db5-4fa8-85bf-001b2a121c8e\" (UID: \"b788b3ef-6db5-4fa8-85bf-001b2a121c8e\") " Mar 08 02:10:05 crc kubenswrapper[4762]: I0308 02:10:05.679963 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b788b3ef-6db5-4fa8-85bf-001b2a121c8e-kube-api-access-wnp8l" (OuterVolumeSpecName: "kube-api-access-wnp8l") pod "b788b3ef-6db5-4fa8-85bf-001b2a121c8e" (UID: "b788b3ef-6db5-4fa8-85bf-001b2a121c8e"). InnerVolumeSpecName "kube-api-access-wnp8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:10:05 crc kubenswrapper[4762]: I0308 02:10:05.761477 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnp8l\" (UniqueName: \"kubernetes.io/projected/b788b3ef-6db5-4fa8-85bf-001b2a121c8e-kube-api-access-wnp8l\") on node \"crc\" DevicePath \"\"" Mar 08 02:10:06 crc kubenswrapper[4762]: I0308 02:10:06.101237 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548930-szg5g" event={"ID":"b788b3ef-6db5-4fa8-85bf-001b2a121c8e","Type":"ContainerDied","Data":"977ad393caea5e24e2dfde8c345cfeac0f3e60ed6cfb9709f0268be7505a4ae8"} Mar 08 02:10:06 crc kubenswrapper[4762]: I0308 02:10:06.101278 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548930-szg5g" Mar 08 02:10:06 crc kubenswrapper[4762]: I0308 02:10:06.101296 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="977ad393caea5e24e2dfde8c345cfeac0f3e60ed6cfb9709f0268be7505a4ae8" Mar 08 02:10:06 crc kubenswrapper[4762]: I0308 02:10:06.603178 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548924-k89xp"] Mar 08 02:10:06 crc kubenswrapper[4762]: I0308 02:10:06.624584 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548924-k89xp"] Mar 08 02:10:07 crc kubenswrapper[4762]: I0308 02:10:07.274426 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecaf8110-509c-496a-8ef3-49998975a267" path="/var/lib/kubelet/pods/ecaf8110-509c-496a-8ef3-49998975a267/volumes" Mar 08 02:10:48 crc kubenswrapper[4762]: I0308 02:10:48.594713 4762 scope.go:117] "RemoveContainer" containerID="e388e5be2fd56fb7c37e7094c138b2e5bd3f120b09e7ff2b39aa53d8e437b1ad" Mar 08 02:11:48 crc kubenswrapper[4762]: I0308 02:11:48.775452 4762 scope.go:117] "RemoveContainer" containerID="2164a478b2ef58c28735f96b60d2d3e64ac466666e9ab506a77d6b2fdd906656" Mar 08 02:11:50 crc kubenswrapper[4762]: I0308 02:11:50.568191 4762 generic.go:334] "Generic (PLEG): container finished" podID="0d412f05-a4d8-4b97-be7d-7f78eecd17e9" containerID="cd62354c41a32e6861ef8bf46fecd29ab29b7de8ce71b4a1da49188b39209428" exitCode=0 Mar 08 02:11:50 crc kubenswrapper[4762]: I0308 02:11:50.568285 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cqww4/must-gather-c5sdj" event={"ID":"0d412f05-a4d8-4b97-be7d-7f78eecd17e9","Type":"ContainerDied","Data":"cd62354c41a32e6861ef8bf46fecd29ab29b7de8ce71b4a1da49188b39209428"} Mar 08 02:11:50 crc kubenswrapper[4762]: I0308 02:11:50.569175 4762 scope.go:117] "RemoveContainer" 
containerID="cd62354c41a32e6861ef8bf46fecd29ab29b7de8ce71b4a1da49188b39209428" Mar 08 02:11:50 crc kubenswrapper[4762]: I0308 02:11:50.757660 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cqww4_must-gather-c5sdj_0d412f05-a4d8-4b97-be7d-7f78eecd17e9/gather/0.log" Mar 08 02:11:55 crc kubenswrapper[4762]: I0308 02:11:55.145033 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c6k75"] Mar 08 02:11:55 crc kubenswrapper[4762]: E0308 02:11:55.147236 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b788b3ef-6db5-4fa8-85bf-001b2a121c8e" containerName="oc" Mar 08 02:11:55 crc kubenswrapper[4762]: I0308 02:11:55.147276 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="b788b3ef-6db5-4fa8-85bf-001b2a121c8e" containerName="oc" Mar 08 02:11:55 crc kubenswrapper[4762]: I0308 02:11:55.148169 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="b788b3ef-6db5-4fa8-85bf-001b2a121c8e" containerName="oc" Mar 08 02:11:55 crc kubenswrapper[4762]: I0308 02:11:55.153507 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 02:11:55 crc kubenswrapper[4762]: I0308 02:11:55.182419 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6k75"] Mar 08 02:11:55 crc kubenswrapper[4762]: I0308 02:11:55.323804 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd626\" (UniqueName: \"kubernetes.io/projected/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-kube-api-access-qd626\") pod \"redhat-marketplace-c6k75\" (UID: \"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2\") " pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 02:11:55 crc kubenswrapper[4762]: I0308 02:11:55.324081 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-utilities\") pod \"redhat-marketplace-c6k75\" (UID: \"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2\") " pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 02:11:55 crc kubenswrapper[4762]: I0308 02:11:55.324144 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-catalog-content\") pod \"redhat-marketplace-c6k75\" (UID: \"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2\") " pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 02:11:55 crc kubenswrapper[4762]: I0308 02:11:55.427198 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd626\" (UniqueName: \"kubernetes.io/projected/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-kube-api-access-qd626\") pod \"redhat-marketplace-c6k75\" (UID: \"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2\") " pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 02:11:55 crc kubenswrapper[4762]: I0308 02:11:55.427325 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-utilities\") pod \"redhat-marketplace-c6k75\" (UID: \"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2\") " pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 02:11:55 crc kubenswrapper[4762]: I0308 02:11:55.427353 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-catalog-content\") pod \"redhat-marketplace-c6k75\" (UID: \"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2\") " pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 02:11:55 crc kubenswrapper[4762]: I0308 02:11:55.427957 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-utilities\") pod \"redhat-marketplace-c6k75\" (UID: \"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2\") " pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 02:11:55 crc kubenswrapper[4762]: I0308 02:11:55.428010 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-catalog-content\") pod \"redhat-marketplace-c6k75\" (UID: \"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2\") " pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 02:11:55 crc kubenswrapper[4762]: I0308 02:11:55.448702 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd626\" (UniqueName: \"kubernetes.io/projected/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-kube-api-access-qd626\") pod \"redhat-marketplace-c6k75\" (UID: \"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2\") " pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 02:11:55 crc kubenswrapper[4762]: I0308 02:11:55.488470 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 02:11:55 crc kubenswrapper[4762]: I0308 02:11:55.979660 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6k75"] Mar 08 02:11:56 crc kubenswrapper[4762]: I0308 02:11:56.658030 4762 generic.go:334] "Generic (PLEG): container finished" podID="3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2" containerID="bfa2d9a8a1029cc2e560cde6c0aa56ba963e9ee09b012b2061ec9bc3aa704693" exitCode=0 Mar 08 02:11:56 crc kubenswrapper[4762]: I0308 02:11:56.658106 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6k75" event={"ID":"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2","Type":"ContainerDied","Data":"bfa2d9a8a1029cc2e560cde6c0aa56ba963e9ee09b012b2061ec9bc3aa704693"} Mar 08 02:11:56 crc kubenswrapper[4762]: I0308 02:11:56.658357 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6k75" event={"ID":"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2","Type":"ContainerStarted","Data":"17984abfab75ef8caf5d249b251fa809aff85a45fed00ca7f61df87b91286bb7"} Mar 08 02:11:57 crc kubenswrapper[4762]: I0308 02:11:57.674574 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6k75" event={"ID":"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2","Type":"ContainerStarted","Data":"2fcec6206d7bcc276b74b5101a3b0383081033ea39527ec98de2307d0e4252e0"} Mar 08 02:11:58 crc kubenswrapper[4762]: I0308 02:11:58.686247 4762 generic.go:334] "Generic (PLEG): container finished" podID="3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2" containerID="2fcec6206d7bcc276b74b5101a3b0383081033ea39527ec98de2307d0e4252e0" exitCode=0 Mar 08 02:11:58 crc kubenswrapper[4762]: I0308 02:11:58.686301 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6k75" 
event={"ID":"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2","Type":"ContainerDied","Data":"2fcec6206d7bcc276b74b5101a3b0383081033ea39527ec98de2307d0e4252e0"} Mar 08 02:11:58 crc kubenswrapper[4762]: I0308 02:11:58.928151 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-cqww4/must-gather-c5sdj"] Mar 08 02:11:58 crc kubenswrapper[4762]: I0308 02:11:58.928575 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-cqww4/must-gather-c5sdj" podUID="0d412f05-a4d8-4b97-be7d-7f78eecd17e9" containerName="copy" containerID="cri-o://b79d218670a2967db4fff2c0d122c3c809fff030a7a8c0b45c30bc7d12be3c19" gracePeriod=2 Mar 08 02:11:58 crc kubenswrapper[4762]: I0308 02:11:58.942512 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-cqww4/must-gather-c5sdj"] Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.444630 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cqww4_must-gather-c5sdj_0d412f05-a4d8-4b97-be7d-7f78eecd17e9/copy/0.log" Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.445589 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cqww4/must-gather-c5sdj" Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.535594 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj7nc\" (UniqueName: \"kubernetes.io/projected/0d412f05-a4d8-4b97-be7d-7f78eecd17e9-kube-api-access-gj7nc\") pod \"0d412f05-a4d8-4b97-be7d-7f78eecd17e9\" (UID: \"0d412f05-a4d8-4b97-be7d-7f78eecd17e9\") " Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.536150 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d412f05-a4d8-4b97-be7d-7f78eecd17e9-must-gather-output\") pod \"0d412f05-a4d8-4b97-be7d-7f78eecd17e9\" (UID: \"0d412f05-a4d8-4b97-be7d-7f78eecd17e9\") " Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.543203 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d412f05-a4d8-4b97-be7d-7f78eecd17e9-kube-api-access-gj7nc" (OuterVolumeSpecName: "kube-api-access-gj7nc") pod "0d412f05-a4d8-4b97-be7d-7f78eecd17e9" (UID: "0d412f05-a4d8-4b97-be7d-7f78eecd17e9"). InnerVolumeSpecName "kube-api-access-gj7nc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.641742 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj7nc\" (UniqueName: \"kubernetes.io/projected/0d412f05-a4d8-4b97-be7d-7f78eecd17e9-kube-api-access-gj7nc\") on node \"crc\" DevicePath \"\"" Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.703192 4762 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-cqww4_must-gather-c5sdj_0d412f05-a4d8-4b97-be7d-7f78eecd17e9/copy/0.log" Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.703597 4762 generic.go:334] "Generic (PLEG): container finished" podID="0d412f05-a4d8-4b97-be7d-7f78eecd17e9" containerID="b79d218670a2967db4fff2c0d122c3c809fff030a7a8c0b45c30bc7d12be3c19" exitCode=143 Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.703696 4762 scope.go:117] "RemoveContainer" containerID="b79d218670a2967db4fff2c0d122c3c809fff030a7a8c0b45c30bc7d12be3c19" Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.703909 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cqww4/must-gather-c5sdj" Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.709035 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6k75" event={"ID":"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2","Type":"ContainerStarted","Data":"fbc5c520148f1e574674b02fc4625c778c1790471864de25222b78b3b3b2909f"} Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.738546 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d412f05-a4d8-4b97-be7d-7f78eecd17e9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0d412f05-a4d8-4b97-be7d-7f78eecd17e9" (UID: "0d412f05-a4d8-4b97-be7d-7f78eecd17e9"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.743836 4762 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d412f05-a4d8-4b97-be7d-7f78eecd17e9-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.744309 4762 scope.go:117] "RemoveContainer" containerID="cd62354c41a32e6861ef8bf46fecd29ab29b7de8ce71b4a1da49188b39209428" Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.748665 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c6k75" podStartSLOduration=2.312685869 podStartE2EDuration="4.748642431s" podCreationTimestamp="2026-03-08 02:11:55 +0000 UTC" firstStartedPulling="2026-03-08 02:11:56.660519202 +0000 UTC m=+6538.134663546" lastFinishedPulling="2026-03-08 02:11:59.096475764 +0000 UTC m=+6540.570620108" observedRunningTime="2026-03-08 02:11:59.733407543 +0000 UTC m=+6541.207551897" watchObservedRunningTime="2026-03-08 02:11:59.748642431 +0000 UTC m=+6541.222786775" Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.791130 4762 scope.go:117] "RemoveContainer" containerID="b79d218670a2967db4fff2c0d122c3c809fff030a7a8c0b45c30bc7d12be3c19" Mar 08 02:11:59 crc kubenswrapper[4762]: E0308 02:11:59.794565 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b79d218670a2967db4fff2c0d122c3c809fff030a7a8c0b45c30bc7d12be3c19\": container with ID starting with b79d218670a2967db4fff2c0d122c3c809fff030a7a8c0b45c30bc7d12be3c19 not found: ID does not exist" containerID="b79d218670a2967db4fff2c0d122c3c809fff030a7a8c0b45c30bc7d12be3c19" Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.794620 4762 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b79d218670a2967db4fff2c0d122c3c809fff030a7a8c0b45c30bc7d12be3c19"} err="failed to get container status \"b79d218670a2967db4fff2c0d122c3c809fff030a7a8c0b45c30bc7d12be3c19\": rpc error: code = NotFound desc = could not find container \"b79d218670a2967db4fff2c0d122c3c809fff030a7a8c0b45c30bc7d12be3c19\": container with ID starting with b79d218670a2967db4fff2c0d122c3c809fff030a7a8c0b45c30bc7d12be3c19 not found: ID does not exist" Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.794648 4762 scope.go:117] "RemoveContainer" containerID="cd62354c41a32e6861ef8bf46fecd29ab29b7de8ce71b4a1da49188b39209428" Mar 08 02:11:59 crc kubenswrapper[4762]: E0308 02:11:59.798011 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd62354c41a32e6861ef8bf46fecd29ab29b7de8ce71b4a1da49188b39209428\": container with ID starting with cd62354c41a32e6861ef8bf46fecd29ab29b7de8ce71b4a1da49188b39209428 not found: ID does not exist" containerID="cd62354c41a32e6861ef8bf46fecd29ab29b7de8ce71b4a1da49188b39209428" Mar 08 02:11:59 crc kubenswrapper[4762]: I0308 02:11:59.798066 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd62354c41a32e6861ef8bf46fecd29ab29b7de8ce71b4a1da49188b39209428"} err="failed to get container status \"cd62354c41a32e6861ef8bf46fecd29ab29b7de8ce71b4a1da49188b39209428\": rpc error: code = NotFound desc = could not find container \"cd62354c41a32e6861ef8bf46fecd29ab29b7de8ce71b4a1da49188b39209428\": container with ID starting with cd62354c41a32e6861ef8bf46fecd29ab29b7de8ce71b4a1da49188b39209428 not found: ID does not exist" Mar 08 02:12:00 crc kubenswrapper[4762]: I0308 02:12:00.154090 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548932-vms5g"] Mar 08 02:12:00 crc kubenswrapper[4762]: E0308 02:12:00.154912 4762 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0d412f05-a4d8-4b97-be7d-7f78eecd17e9" containerName="gather" Mar 08 02:12:00 crc kubenswrapper[4762]: I0308 02:12:00.154941 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d412f05-a4d8-4b97-be7d-7f78eecd17e9" containerName="gather" Mar 08 02:12:00 crc kubenswrapper[4762]: E0308 02:12:00.155012 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d412f05-a4d8-4b97-be7d-7f78eecd17e9" containerName="copy" Mar 08 02:12:00 crc kubenswrapper[4762]: I0308 02:12:00.155020 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d412f05-a4d8-4b97-be7d-7f78eecd17e9" containerName="copy" Mar 08 02:12:00 crc kubenswrapper[4762]: I0308 02:12:00.155289 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d412f05-a4d8-4b97-be7d-7f78eecd17e9" containerName="copy" Mar 08 02:12:00 crc kubenswrapper[4762]: I0308 02:12:00.155312 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d412f05-a4d8-4b97-be7d-7f78eecd17e9" containerName="gather" Mar 08 02:12:00 crc kubenswrapper[4762]: I0308 02:12:00.157404 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548932-vms5g" Mar 08 02:12:00 crc kubenswrapper[4762]: I0308 02:12:00.160339 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 02:12:00 crc kubenswrapper[4762]: I0308 02:12:00.160547 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 02:12:00 crc kubenswrapper[4762]: I0308 02:12:00.160688 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 02:12:00 crc kubenswrapper[4762]: I0308 02:12:00.173568 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548932-vms5g"] Mar 08 02:12:00 crc kubenswrapper[4762]: I0308 02:12:00.255793 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv8rj\" (UniqueName: \"kubernetes.io/projected/981a36e4-a9ba-467d-a3f7-517e67e41f90-kube-api-access-gv8rj\") pod \"auto-csr-approver-29548932-vms5g\" (UID: \"981a36e4-a9ba-467d-a3f7-517e67e41f90\") " pod="openshift-infra/auto-csr-approver-29548932-vms5g" Mar 08 02:12:00 crc kubenswrapper[4762]: I0308 02:12:00.358625 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv8rj\" (UniqueName: \"kubernetes.io/projected/981a36e4-a9ba-467d-a3f7-517e67e41f90-kube-api-access-gv8rj\") pod \"auto-csr-approver-29548932-vms5g\" (UID: \"981a36e4-a9ba-467d-a3f7-517e67e41f90\") " pod="openshift-infra/auto-csr-approver-29548932-vms5g" Mar 08 02:12:00 crc kubenswrapper[4762]: I0308 02:12:00.374048 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv8rj\" (UniqueName: \"kubernetes.io/projected/981a36e4-a9ba-467d-a3f7-517e67e41f90-kube-api-access-gv8rj\") pod \"auto-csr-approver-29548932-vms5g\" (UID: \"981a36e4-a9ba-467d-a3f7-517e67e41f90\") " 
pod="openshift-infra/auto-csr-approver-29548932-vms5g" Mar 08 02:12:00 crc kubenswrapper[4762]: I0308 02:12:00.512967 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548932-vms5g" Mar 08 02:12:01 crc kubenswrapper[4762]: I0308 02:12:01.020910 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548932-vms5g"] Mar 08 02:12:01 crc kubenswrapper[4762]: W0308 02:12:01.021365 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod981a36e4_a9ba_467d_a3f7_517e67e41f90.slice/crio-cde40d9ec2cb5493222343f3a6b4ad263dd8991e1c088da5c237966039ac768d WatchSource:0}: Error finding container cde40d9ec2cb5493222343f3a6b4ad263dd8991e1c088da5c237966039ac768d: Status 404 returned error can't find the container with id cde40d9ec2cb5493222343f3a6b4ad263dd8991e1c088da5c237966039ac768d Mar 08 02:12:01 crc kubenswrapper[4762]: I0308 02:12:01.280605 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d412f05-a4d8-4b97-be7d-7f78eecd17e9" path="/var/lib/kubelet/pods/0d412f05-a4d8-4b97-be7d-7f78eecd17e9/volumes" Mar 08 02:12:01 crc kubenswrapper[4762]: I0308 02:12:01.735076 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548932-vms5g" event={"ID":"981a36e4-a9ba-467d-a3f7-517e67e41f90","Type":"ContainerStarted","Data":"cde40d9ec2cb5493222343f3a6b4ad263dd8991e1c088da5c237966039ac768d"} Mar 08 02:12:02 crc kubenswrapper[4762]: I0308 02:12:02.751186 4762 generic.go:334] "Generic (PLEG): container finished" podID="981a36e4-a9ba-467d-a3f7-517e67e41f90" containerID="d444daafed913274e332756beb96ef221170cb482de67fd3c9879ee20d566c00" exitCode=0 Mar 08 02:12:02 crc kubenswrapper[4762]: I0308 02:12:02.751573 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548932-vms5g" 
event={"ID":"981a36e4-a9ba-467d-a3f7-517e67e41f90","Type":"ContainerDied","Data":"d444daafed913274e332756beb96ef221170cb482de67fd3c9879ee20d566c00"} Mar 08 02:12:04 crc kubenswrapper[4762]: I0308 02:12:04.244984 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548932-vms5g" Mar 08 02:12:04 crc kubenswrapper[4762]: I0308 02:12:04.357961 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv8rj\" (UniqueName: \"kubernetes.io/projected/981a36e4-a9ba-467d-a3f7-517e67e41f90-kube-api-access-gv8rj\") pod \"981a36e4-a9ba-467d-a3f7-517e67e41f90\" (UID: \"981a36e4-a9ba-467d-a3f7-517e67e41f90\") " Mar 08 02:12:04 crc kubenswrapper[4762]: I0308 02:12:04.365287 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/981a36e4-a9ba-467d-a3f7-517e67e41f90-kube-api-access-gv8rj" (OuterVolumeSpecName: "kube-api-access-gv8rj") pod "981a36e4-a9ba-467d-a3f7-517e67e41f90" (UID: "981a36e4-a9ba-467d-a3f7-517e67e41f90"). InnerVolumeSpecName "kube-api-access-gv8rj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:12:04 crc kubenswrapper[4762]: I0308 02:12:04.460782 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv8rj\" (UniqueName: \"kubernetes.io/projected/981a36e4-a9ba-467d-a3f7-517e67e41f90-kube-api-access-gv8rj\") on node \"crc\" DevicePath \"\"" Mar 08 02:12:04 crc kubenswrapper[4762]: I0308 02:12:04.776387 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548932-vms5g" event={"ID":"981a36e4-a9ba-467d-a3f7-517e67e41f90","Type":"ContainerDied","Data":"cde40d9ec2cb5493222343f3a6b4ad263dd8991e1c088da5c237966039ac768d"} Mar 08 02:12:04 crc kubenswrapper[4762]: I0308 02:12:04.776697 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cde40d9ec2cb5493222343f3a6b4ad263dd8991e1c088da5c237966039ac768d" Mar 08 02:12:04 crc kubenswrapper[4762]: I0308 02:12:04.776751 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548932-vms5g" Mar 08 02:12:05 crc kubenswrapper[4762]: I0308 02:12:05.336688 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548926-snvmr"] Mar 08 02:12:05 crc kubenswrapper[4762]: I0308 02:12:05.353926 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548926-snvmr"] Mar 08 02:12:05 crc kubenswrapper[4762]: I0308 02:12:05.489217 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 02:12:05 crc kubenswrapper[4762]: I0308 02:12:05.489297 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 02:12:05 crc kubenswrapper[4762]: I0308 02:12:05.545219 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 
02:12:05 crc kubenswrapper[4762]: I0308 02:12:05.868928 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 02:12:05 crc kubenswrapper[4762]: I0308 02:12:05.968434 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6k75"] Mar 08 02:12:07 crc kubenswrapper[4762]: I0308 02:12:07.281667 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f" path="/var/lib/kubelet/pods/3e6e24dc-fb0a-4da3-ad97-0c3d9611ef9f/volumes" Mar 08 02:12:07 crc kubenswrapper[4762]: I0308 02:12:07.825342 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c6k75" podUID="3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2" containerName="registry-server" containerID="cri-o://fbc5c520148f1e574674b02fc4625c778c1790471864de25222b78b3b3b2909f" gracePeriod=2 Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.503067 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.678821 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-utilities\") pod \"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2\" (UID: \"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2\") " Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.678974 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd626\" (UniqueName: \"kubernetes.io/projected/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-kube-api-access-qd626\") pod \"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2\" (UID: \"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2\") " Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.679303 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-catalog-content\") pod \"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2\" (UID: \"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2\") " Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.679541 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-utilities" (OuterVolumeSpecName: "utilities") pod "3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2" (UID: "3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.680218 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.684075 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-kube-api-access-qd626" (OuterVolumeSpecName: "kube-api-access-qd626") pod "3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2" (UID: "3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2"). InnerVolumeSpecName "kube-api-access-qd626". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.702030 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2" (UID: "3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.782851 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.782883 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd626\" (UniqueName: \"kubernetes.io/projected/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2-kube-api-access-qd626\") on node \"crc\" DevicePath \"\"" Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.840106 4762 generic.go:334] "Generic (PLEG): container finished" podID="3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2" containerID="fbc5c520148f1e574674b02fc4625c778c1790471864de25222b78b3b3b2909f" exitCode=0 Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.840147 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6k75" event={"ID":"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2","Type":"ContainerDied","Data":"fbc5c520148f1e574674b02fc4625c778c1790471864de25222b78b3b3b2909f"} Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.840173 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c6k75" event={"ID":"3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2","Type":"ContainerDied","Data":"17984abfab75ef8caf5d249b251fa809aff85a45fed00ca7f61df87b91286bb7"} Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.840179 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c6k75" Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.840190 4762 scope.go:117] "RemoveContainer" containerID="fbc5c520148f1e574674b02fc4625c778c1790471864de25222b78b3b3b2909f" Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.872800 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6k75"] Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.878889 4762 scope.go:117] "RemoveContainer" containerID="2fcec6206d7bcc276b74b5101a3b0383081033ea39527ec98de2307d0e4252e0" Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.886899 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c6k75"] Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.918301 4762 scope.go:117] "RemoveContainer" containerID="bfa2d9a8a1029cc2e560cde6c0aa56ba963e9ee09b012b2061ec9bc3aa704693" Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.983848 4762 scope.go:117] "RemoveContainer" containerID="fbc5c520148f1e574674b02fc4625c778c1790471864de25222b78b3b3b2909f" Mar 08 02:12:08 crc kubenswrapper[4762]: E0308 02:12:08.984390 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc5c520148f1e574674b02fc4625c778c1790471864de25222b78b3b3b2909f\": container with ID starting with fbc5c520148f1e574674b02fc4625c778c1790471864de25222b78b3b3b2909f not found: ID does not exist" containerID="fbc5c520148f1e574674b02fc4625c778c1790471864de25222b78b3b3b2909f" Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.984419 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc5c520148f1e574674b02fc4625c778c1790471864de25222b78b3b3b2909f"} err="failed to get container status \"fbc5c520148f1e574674b02fc4625c778c1790471864de25222b78b3b3b2909f\": rpc error: code = NotFound desc = could not find container 
\"fbc5c520148f1e574674b02fc4625c778c1790471864de25222b78b3b3b2909f\": container with ID starting with fbc5c520148f1e574674b02fc4625c778c1790471864de25222b78b3b3b2909f not found: ID does not exist" Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.984477 4762 scope.go:117] "RemoveContainer" containerID="2fcec6206d7bcc276b74b5101a3b0383081033ea39527ec98de2307d0e4252e0" Mar 08 02:12:08 crc kubenswrapper[4762]: E0308 02:12:08.984818 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fcec6206d7bcc276b74b5101a3b0383081033ea39527ec98de2307d0e4252e0\": container with ID starting with 2fcec6206d7bcc276b74b5101a3b0383081033ea39527ec98de2307d0e4252e0 not found: ID does not exist" containerID="2fcec6206d7bcc276b74b5101a3b0383081033ea39527ec98de2307d0e4252e0" Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.984839 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fcec6206d7bcc276b74b5101a3b0383081033ea39527ec98de2307d0e4252e0"} err="failed to get container status \"2fcec6206d7bcc276b74b5101a3b0383081033ea39527ec98de2307d0e4252e0\": rpc error: code = NotFound desc = could not find container \"2fcec6206d7bcc276b74b5101a3b0383081033ea39527ec98de2307d0e4252e0\": container with ID starting with 2fcec6206d7bcc276b74b5101a3b0383081033ea39527ec98de2307d0e4252e0 not found: ID does not exist" Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.984852 4762 scope.go:117] "RemoveContainer" containerID="bfa2d9a8a1029cc2e560cde6c0aa56ba963e9ee09b012b2061ec9bc3aa704693" Mar 08 02:12:08 crc kubenswrapper[4762]: E0308 02:12:08.985032 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfa2d9a8a1029cc2e560cde6c0aa56ba963e9ee09b012b2061ec9bc3aa704693\": container with ID starting with bfa2d9a8a1029cc2e560cde6c0aa56ba963e9ee09b012b2061ec9bc3aa704693 not found: ID does not exist" 
containerID="bfa2d9a8a1029cc2e560cde6c0aa56ba963e9ee09b012b2061ec9bc3aa704693" Mar 08 02:12:08 crc kubenswrapper[4762]: I0308 02:12:08.985052 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfa2d9a8a1029cc2e560cde6c0aa56ba963e9ee09b012b2061ec9bc3aa704693"} err="failed to get container status \"bfa2d9a8a1029cc2e560cde6c0aa56ba963e9ee09b012b2061ec9bc3aa704693\": rpc error: code = NotFound desc = could not find container \"bfa2d9a8a1029cc2e560cde6c0aa56ba963e9ee09b012b2061ec9bc3aa704693\": container with ID starting with bfa2d9a8a1029cc2e560cde6c0aa56ba963e9ee09b012b2061ec9bc3aa704693 not found: ID does not exist" Mar 08 02:12:09 crc kubenswrapper[4762]: I0308 02:12:09.282744 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2" path="/var/lib/kubelet/pods/3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2/volumes" Mar 08 02:12:12 crc kubenswrapper[4762]: I0308 02:12:12.852294 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 02:12:12 crc kubenswrapper[4762]: I0308 02:12:12.852869 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.356454 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b9zmx"] Mar 08 02:12:41 crc kubenswrapper[4762]: E0308 02:12:41.357686 4762 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2" containerName="extract-content" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.357702 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2" containerName="extract-content" Mar 08 02:12:41 crc kubenswrapper[4762]: E0308 02:12:41.357731 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2" containerName="registry-server" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.357738 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2" containerName="registry-server" Mar 08 02:12:41 crc kubenswrapper[4762]: E0308 02:12:41.357748 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="981a36e4-a9ba-467d-a3f7-517e67e41f90" containerName="oc" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.357783 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="981a36e4-a9ba-467d-a3f7-517e67e41f90" containerName="oc" Mar 08 02:12:41 crc kubenswrapper[4762]: E0308 02:12:41.357828 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2" containerName="extract-utilities" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.357837 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2" containerName="extract-utilities" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.358082 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bf919c1-6da6-4f7c-ae0d-7e4b8ad3cba2" containerName="registry-server" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.358107 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="981a36e4-a9ba-467d-a3f7-517e67e41f90" containerName="oc" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.359962 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.390390 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b9zmx"] Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.539959 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czxpg\" (UniqueName: \"kubernetes.io/projected/0d52435e-c857-4ebe-aaed-bee6a074bfc9-kube-api-access-czxpg\") pod \"redhat-operators-b9zmx\" (UID: \"0d52435e-c857-4ebe-aaed-bee6a074bfc9\") " pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.540317 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d52435e-c857-4ebe-aaed-bee6a074bfc9-utilities\") pod \"redhat-operators-b9zmx\" (UID: \"0d52435e-c857-4ebe-aaed-bee6a074bfc9\") " pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.540364 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d52435e-c857-4ebe-aaed-bee6a074bfc9-catalog-content\") pod \"redhat-operators-b9zmx\" (UID: \"0d52435e-c857-4ebe-aaed-bee6a074bfc9\") " pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.673946 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czxpg\" (UniqueName: \"kubernetes.io/projected/0d52435e-c857-4ebe-aaed-bee6a074bfc9-kube-api-access-czxpg\") pod \"redhat-operators-b9zmx\" (UID: \"0d52435e-c857-4ebe-aaed-bee6a074bfc9\") " pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.674065 4762 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d52435e-c857-4ebe-aaed-bee6a074bfc9-utilities\") pod \"redhat-operators-b9zmx\" (UID: \"0d52435e-c857-4ebe-aaed-bee6a074bfc9\") " pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.674091 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d52435e-c857-4ebe-aaed-bee6a074bfc9-catalog-content\") pod \"redhat-operators-b9zmx\" (UID: \"0d52435e-c857-4ebe-aaed-bee6a074bfc9\") " pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.674749 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d52435e-c857-4ebe-aaed-bee6a074bfc9-utilities\") pod \"redhat-operators-b9zmx\" (UID: \"0d52435e-c857-4ebe-aaed-bee6a074bfc9\") " pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.682462 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d52435e-c857-4ebe-aaed-bee6a074bfc9-catalog-content\") pod \"redhat-operators-b9zmx\" (UID: \"0d52435e-c857-4ebe-aaed-bee6a074bfc9\") " pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.699497 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czxpg\" (UniqueName: \"kubernetes.io/projected/0d52435e-c857-4ebe-aaed-bee6a074bfc9-kube-api-access-czxpg\") pod \"redhat-operators-b9zmx\" (UID: \"0d52435e-c857-4ebe-aaed-bee6a074bfc9\") " pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:12:41 crc kubenswrapper[4762]: I0308 02:12:41.980910 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:12:42 crc kubenswrapper[4762]: I0308 02:12:42.524117 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b9zmx"] Mar 08 02:12:42 crc kubenswrapper[4762]: I0308 02:12:42.851557 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 02:12:42 crc kubenswrapper[4762]: I0308 02:12:42.851845 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 02:12:43 crc kubenswrapper[4762]: I0308 02:12:43.335884 4762 generic.go:334] "Generic (PLEG): container finished" podID="0d52435e-c857-4ebe-aaed-bee6a074bfc9" containerID="51621255fadd3fd887579214c74dea37ab67f49001791485d6f8a86fa9c60abc" exitCode=0 Mar 08 02:12:43 crc kubenswrapper[4762]: I0308 02:12:43.335938 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9zmx" event={"ID":"0d52435e-c857-4ebe-aaed-bee6a074bfc9","Type":"ContainerDied","Data":"51621255fadd3fd887579214c74dea37ab67f49001791485d6f8a86fa9c60abc"} Mar 08 02:12:43 crc kubenswrapper[4762]: I0308 02:12:43.335979 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9zmx" event={"ID":"0d52435e-c857-4ebe-aaed-bee6a074bfc9","Type":"ContainerStarted","Data":"eda58ace84a7b7dfa5d1671a85c8b3ce1146c784585503b1b64f1b9bddc0f122"} Mar 08 02:12:44 crc kubenswrapper[4762]: I0308 02:12:44.351344 4762 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-b9zmx" event={"ID":"0d52435e-c857-4ebe-aaed-bee6a074bfc9","Type":"ContainerStarted","Data":"7b0f7430687aff1318fcadd799a567d4b846ed1454c73efffa6a53317a66c4e2"} Mar 08 02:12:44 crc kubenswrapper[4762]: I0308 02:12:44.514789 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z7md9"] Mar 08 02:12:44 crc kubenswrapper[4762]: I0308 02:12:44.521355 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7md9" Mar 08 02:12:44 crc kubenswrapper[4762]: I0308 02:12:44.545264 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z7md9"] Mar 08 02:12:44 crc kubenswrapper[4762]: I0308 02:12:44.547342 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/971d6b8b-7db5-4083-a7ce-3789520e3e13-catalog-content\") pod \"certified-operators-z7md9\" (UID: \"971d6b8b-7db5-4083-a7ce-3789520e3e13\") " pod="openshift-marketplace/certified-operators-z7md9" Mar 08 02:12:44 crc kubenswrapper[4762]: I0308 02:12:44.547374 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/971d6b8b-7db5-4083-a7ce-3789520e3e13-utilities\") pod \"certified-operators-z7md9\" (UID: \"971d6b8b-7db5-4083-a7ce-3789520e3e13\") " pod="openshift-marketplace/certified-operators-z7md9" Mar 08 02:12:44 crc kubenswrapper[4762]: I0308 02:12:44.547541 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj7b7\" (UniqueName: \"kubernetes.io/projected/971d6b8b-7db5-4083-a7ce-3789520e3e13-kube-api-access-bj7b7\") pod \"certified-operators-z7md9\" (UID: \"971d6b8b-7db5-4083-a7ce-3789520e3e13\") " pod="openshift-marketplace/certified-operators-z7md9" Mar 
08 02:12:44 crc kubenswrapper[4762]: I0308 02:12:44.649214 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj7b7\" (UniqueName: \"kubernetes.io/projected/971d6b8b-7db5-4083-a7ce-3789520e3e13-kube-api-access-bj7b7\") pod \"certified-operators-z7md9\" (UID: \"971d6b8b-7db5-4083-a7ce-3789520e3e13\") " pod="openshift-marketplace/certified-operators-z7md9" Mar 08 02:12:44 crc kubenswrapper[4762]: I0308 02:12:44.649299 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/971d6b8b-7db5-4083-a7ce-3789520e3e13-catalog-content\") pod \"certified-operators-z7md9\" (UID: \"971d6b8b-7db5-4083-a7ce-3789520e3e13\") " pod="openshift-marketplace/certified-operators-z7md9" Mar 08 02:12:44 crc kubenswrapper[4762]: I0308 02:12:44.649323 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/971d6b8b-7db5-4083-a7ce-3789520e3e13-utilities\") pod \"certified-operators-z7md9\" (UID: \"971d6b8b-7db5-4083-a7ce-3789520e3e13\") " pod="openshift-marketplace/certified-operators-z7md9" Mar 08 02:12:44 crc kubenswrapper[4762]: I0308 02:12:44.649943 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/971d6b8b-7db5-4083-a7ce-3789520e3e13-catalog-content\") pod \"certified-operators-z7md9\" (UID: \"971d6b8b-7db5-4083-a7ce-3789520e3e13\") " pod="openshift-marketplace/certified-operators-z7md9" Mar 08 02:12:44 crc kubenswrapper[4762]: I0308 02:12:44.649999 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/971d6b8b-7db5-4083-a7ce-3789520e3e13-utilities\") pod \"certified-operators-z7md9\" (UID: \"971d6b8b-7db5-4083-a7ce-3789520e3e13\") " pod="openshift-marketplace/certified-operators-z7md9" Mar 08 02:12:44 crc kubenswrapper[4762]: I0308 
02:12:44.672729 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj7b7\" (UniqueName: \"kubernetes.io/projected/971d6b8b-7db5-4083-a7ce-3789520e3e13-kube-api-access-bj7b7\") pod \"certified-operators-z7md9\" (UID: \"971d6b8b-7db5-4083-a7ce-3789520e3e13\") " pod="openshift-marketplace/certified-operators-z7md9" Mar 08 02:12:44 crc kubenswrapper[4762]: I0308 02:12:44.840476 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7md9" Mar 08 02:12:45 crc kubenswrapper[4762]: I0308 02:12:45.340453 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z7md9"] Mar 08 02:12:45 crc kubenswrapper[4762]: I0308 02:12:45.373435 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7md9" event={"ID":"971d6b8b-7db5-4083-a7ce-3789520e3e13","Type":"ContainerStarted","Data":"bf33cde103c8b7873816fab3fcfe228bd78152c6715fcfc1dbdef22d92874a4b"} Mar 08 02:12:46 crc kubenswrapper[4762]: I0308 02:12:46.387600 4762 generic.go:334] "Generic (PLEG): container finished" podID="971d6b8b-7db5-4083-a7ce-3789520e3e13" containerID="a75d726ce61ef7d8a808560089c60e047c2691828c48f500ebf567778b0d6158" exitCode=0 Mar 08 02:12:46 crc kubenswrapper[4762]: I0308 02:12:46.387698 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7md9" event={"ID":"971d6b8b-7db5-4083-a7ce-3789520e3e13","Type":"ContainerDied","Data":"a75d726ce61ef7d8a808560089c60e047c2691828c48f500ebf567778b0d6158"} Mar 08 02:12:47 crc kubenswrapper[4762]: I0308 02:12:47.425862 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7md9" event={"ID":"971d6b8b-7db5-4083-a7ce-3789520e3e13","Type":"ContainerStarted","Data":"269992fa0ba498f478d66912138c98c62ab64de930169e3d0aaad1d82679bd5e"} Mar 08 02:12:48 crc kubenswrapper[4762]: I0308 
02:12:48.882194 4762 scope.go:117] "RemoveContainer" containerID="e6cfe622d3b847d080c5a2e6e366d96117ce23d25bc53d8179c3c19d8e7ff671" Mar 08 02:12:49 crc kubenswrapper[4762]: I0308 02:12:49.449046 4762 generic.go:334] "Generic (PLEG): container finished" podID="0d52435e-c857-4ebe-aaed-bee6a074bfc9" containerID="7b0f7430687aff1318fcadd799a567d4b846ed1454c73efffa6a53317a66c4e2" exitCode=0 Mar 08 02:12:49 crc kubenswrapper[4762]: I0308 02:12:49.449144 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9zmx" event={"ID":"0d52435e-c857-4ebe-aaed-bee6a074bfc9","Type":"ContainerDied","Data":"7b0f7430687aff1318fcadd799a567d4b846ed1454c73efffa6a53317a66c4e2"} Mar 08 02:12:49 crc kubenswrapper[4762]: I0308 02:12:49.451934 4762 generic.go:334] "Generic (PLEG): container finished" podID="971d6b8b-7db5-4083-a7ce-3789520e3e13" containerID="269992fa0ba498f478d66912138c98c62ab64de930169e3d0aaad1d82679bd5e" exitCode=0 Mar 08 02:12:49 crc kubenswrapper[4762]: I0308 02:12:49.451987 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7md9" event={"ID":"971d6b8b-7db5-4083-a7ce-3789520e3e13","Type":"ContainerDied","Data":"269992fa0ba498f478d66912138c98c62ab64de930169e3d0aaad1d82679bd5e"} Mar 08 02:12:50 crc kubenswrapper[4762]: I0308 02:12:50.466804 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9zmx" event={"ID":"0d52435e-c857-4ebe-aaed-bee6a074bfc9","Type":"ContainerStarted","Data":"7c109d689b66e6833446ccea5d216d153744bb2577b97764a836bd74268d6ab2"} Mar 08 02:12:50 crc kubenswrapper[4762]: I0308 02:12:50.469082 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7md9" event={"ID":"971d6b8b-7db5-4083-a7ce-3789520e3e13","Type":"ContainerStarted","Data":"eb766221a34cbd07493106a7ad9d2f8686f06c2d7dc7f63b4734da099e9f2b0b"} Mar 08 02:12:50 crc kubenswrapper[4762]: I0308 02:12:50.496284 4762 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b9zmx" podStartSLOduration=2.930280105 podStartE2EDuration="9.496263822s" podCreationTimestamp="2026-03-08 02:12:41 +0000 UTC" firstStartedPulling="2026-03-08 02:12:43.337662171 +0000 UTC m=+6584.811806515" lastFinishedPulling="2026-03-08 02:12:49.903645888 +0000 UTC m=+6591.377790232" observedRunningTime="2026-03-08 02:12:50.482509898 +0000 UTC m=+6591.956654242" watchObservedRunningTime="2026-03-08 02:12:50.496263822 +0000 UTC m=+6591.970408166" Mar 08 02:12:50 crc kubenswrapper[4762]: I0308 02:12:50.510556 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z7md9" podStartSLOduration=3.017966241 podStartE2EDuration="6.510533381s" podCreationTimestamp="2026-03-08 02:12:44 +0000 UTC" firstStartedPulling="2026-03-08 02:12:46.389995629 +0000 UTC m=+6587.864139973" lastFinishedPulling="2026-03-08 02:12:49.882562769 +0000 UTC m=+6591.356707113" observedRunningTime="2026-03-08 02:12:50.50626579 +0000 UTC m=+6591.980410144" watchObservedRunningTime="2026-03-08 02:12:50.510533381 +0000 UTC m=+6591.984677725" Mar 08 02:12:51 crc kubenswrapper[4762]: I0308 02:12:51.981575 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:12:51 crc kubenswrapper[4762]: I0308 02:12:51.981657 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:12:53 crc kubenswrapper[4762]: I0308 02:12:53.037057 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b9zmx" podUID="0d52435e-c857-4ebe-aaed-bee6a074bfc9" containerName="registry-server" probeResult="failure" output=< Mar 08 02:12:53 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 02:12:53 crc kubenswrapper[4762]: > Mar 08 
02:12:54 crc kubenswrapper[4762]: I0308 02:12:54.841155 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z7md9" Mar 08 02:12:54 crc kubenswrapper[4762]: I0308 02:12:54.841542 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z7md9" Mar 08 02:12:54 crc kubenswrapper[4762]: I0308 02:12:54.894775 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z7md9" Mar 08 02:12:55 crc kubenswrapper[4762]: I0308 02:12:55.582099 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z7md9" Mar 08 02:12:55 crc kubenswrapper[4762]: I0308 02:12:55.660037 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z7md9"] Mar 08 02:12:57 crc kubenswrapper[4762]: I0308 02:12:57.562260 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z7md9" podUID="971d6b8b-7db5-4083-a7ce-3789520e3e13" containerName="registry-server" containerID="cri-o://eb766221a34cbd07493106a7ad9d2f8686f06c2d7dc7f63b4734da099e9f2b0b" gracePeriod=2 Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.083420 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z7md9" Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.186514 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/971d6b8b-7db5-4083-a7ce-3789520e3e13-utilities\") pod \"971d6b8b-7db5-4083-a7ce-3789520e3e13\" (UID: \"971d6b8b-7db5-4083-a7ce-3789520e3e13\") " Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.186860 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj7b7\" (UniqueName: \"kubernetes.io/projected/971d6b8b-7db5-4083-a7ce-3789520e3e13-kube-api-access-bj7b7\") pod \"971d6b8b-7db5-4083-a7ce-3789520e3e13\" (UID: \"971d6b8b-7db5-4083-a7ce-3789520e3e13\") " Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.187036 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/971d6b8b-7db5-4083-a7ce-3789520e3e13-catalog-content\") pod \"971d6b8b-7db5-4083-a7ce-3789520e3e13\" (UID: \"971d6b8b-7db5-4083-a7ce-3789520e3e13\") " Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.187368 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/971d6b8b-7db5-4083-a7ce-3789520e3e13-utilities" (OuterVolumeSpecName: "utilities") pod "971d6b8b-7db5-4083-a7ce-3789520e3e13" (UID: "971d6b8b-7db5-4083-a7ce-3789520e3e13"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.187735 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/971d6b8b-7db5-4083-a7ce-3789520e3e13-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.193377 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/971d6b8b-7db5-4083-a7ce-3789520e3e13-kube-api-access-bj7b7" (OuterVolumeSpecName: "kube-api-access-bj7b7") pod "971d6b8b-7db5-4083-a7ce-3789520e3e13" (UID: "971d6b8b-7db5-4083-a7ce-3789520e3e13"). InnerVolumeSpecName "kube-api-access-bj7b7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.253712 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/971d6b8b-7db5-4083-a7ce-3789520e3e13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "971d6b8b-7db5-4083-a7ce-3789520e3e13" (UID: "971d6b8b-7db5-4083-a7ce-3789520e3e13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.289973 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj7b7\" (UniqueName: \"kubernetes.io/projected/971d6b8b-7db5-4083-a7ce-3789520e3e13-kube-api-access-bj7b7\") on node \"crc\" DevicePath \"\"" Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.289999 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/971d6b8b-7db5-4083-a7ce-3789520e3e13-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.580101 4762 generic.go:334] "Generic (PLEG): container finished" podID="971d6b8b-7db5-4083-a7ce-3789520e3e13" containerID="eb766221a34cbd07493106a7ad9d2f8686f06c2d7dc7f63b4734da099e9f2b0b" exitCode=0 Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.580172 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z7md9" Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.580193 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7md9" event={"ID":"971d6b8b-7db5-4083-a7ce-3789520e3e13","Type":"ContainerDied","Data":"eb766221a34cbd07493106a7ad9d2f8686f06c2d7dc7f63b4734da099e9f2b0b"} Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.580451 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z7md9" event={"ID":"971d6b8b-7db5-4083-a7ce-3789520e3e13","Type":"ContainerDied","Data":"bf33cde103c8b7873816fab3fcfe228bd78152c6715fcfc1dbdef22d92874a4b"} Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.580496 4762 scope.go:117] "RemoveContainer" containerID="eb766221a34cbd07493106a7ad9d2f8686f06c2d7dc7f63b4734da099e9f2b0b" Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.606953 4762 scope.go:117] "RemoveContainer" 
containerID="269992fa0ba498f478d66912138c98c62ab64de930169e3d0aaad1d82679bd5e" Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.635860 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z7md9"] Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.639415 4762 scope.go:117] "RemoveContainer" containerID="a75d726ce61ef7d8a808560089c60e047c2691828c48f500ebf567778b0d6158" Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.648154 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z7md9"] Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.709476 4762 scope.go:117] "RemoveContainer" containerID="eb766221a34cbd07493106a7ad9d2f8686f06c2d7dc7f63b4734da099e9f2b0b" Mar 08 02:12:58 crc kubenswrapper[4762]: E0308 02:12:58.709988 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb766221a34cbd07493106a7ad9d2f8686f06c2d7dc7f63b4734da099e9f2b0b\": container with ID starting with eb766221a34cbd07493106a7ad9d2f8686f06c2d7dc7f63b4734da099e9f2b0b not found: ID does not exist" containerID="eb766221a34cbd07493106a7ad9d2f8686f06c2d7dc7f63b4734da099e9f2b0b" Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.710032 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb766221a34cbd07493106a7ad9d2f8686f06c2d7dc7f63b4734da099e9f2b0b"} err="failed to get container status \"eb766221a34cbd07493106a7ad9d2f8686f06c2d7dc7f63b4734da099e9f2b0b\": rpc error: code = NotFound desc = could not find container \"eb766221a34cbd07493106a7ad9d2f8686f06c2d7dc7f63b4734da099e9f2b0b\": container with ID starting with eb766221a34cbd07493106a7ad9d2f8686f06c2d7dc7f63b4734da099e9f2b0b not found: ID does not exist" Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.710060 4762 scope.go:117] "RemoveContainer" 
containerID="269992fa0ba498f478d66912138c98c62ab64de930169e3d0aaad1d82679bd5e" Mar 08 02:12:58 crc kubenswrapper[4762]: E0308 02:12:58.710400 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"269992fa0ba498f478d66912138c98c62ab64de930169e3d0aaad1d82679bd5e\": container with ID starting with 269992fa0ba498f478d66912138c98c62ab64de930169e3d0aaad1d82679bd5e not found: ID does not exist" containerID="269992fa0ba498f478d66912138c98c62ab64de930169e3d0aaad1d82679bd5e" Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.710439 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"269992fa0ba498f478d66912138c98c62ab64de930169e3d0aaad1d82679bd5e"} err="failed to get container status \"269992fa0ba498f478d66912138c98c62ab64de930169e3d0aaad1d82679bd5e\": rpc error: code = NotFound desc = could not find container \"269992fa0ba498f478d66912138c98c62ab64de930169e3d0aaad1d82679bd5e\": container with ID starting with 269992fa0ba498f478d66912138c98c62ab64de930169e3d0aaad1d82679bd5e not found: ID does not exist" Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.710467 4762 scope.go:117] "RemoveContainer" containerID="a75d726ce61ef7d8a808560089c60e047c2691828c48f500ebf567778b0d6158" Mar 08 02:12:58 crc kubenswrapper[4762]: E0308 02:12:58.710707 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a75d726ce61ef7d8a808560089c60e047c2691828c48f500ebf567778b0d6158\": container with ID starting with a75d726ce61ef7d8a808560089c60e047c2691828c48f500ebf567778b0d6158 not found: ID does not exist" containerID="a75d726ce61ef7d8a808560089c60e047c2691828c48f500ebf567778b0d6158" Mar 08 02:12:58 crc kubenswrapper[4762]: I0308 02:12:58.710731 4762 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a75d726ce61ef7d8a808560089c60e047c2691828c48f500ebf567778b0d6158"} err="failed to get container status \"a75d726ce61ef7d8a808560089c60e047c2691828c48f500ebf567778b0d6158\": rpc error: code = NotFound desc = could not find container \"a75d726ce61ef7d8a808560089c60e047c2691828c48f500ebf567778b0d6158\": container with ID starting with a75d726ce61ef7d8a808560089c60e047c2691828c48f500ebf567778b0d6158 not found: ID does not exist" Mar 08 02:12:59 crc kubenswrapper[4762]: I0308 02:12:59.277271 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="971d6b8b-7db5-4083-a7ce-3789520e3e13" path="/var/lib/kubelet/pods/971d6b8b-7db5-4083-a7ce-3789520e3e13/volumes" Mar 08 02:13:03 crc kubenswrapper[4762]: I0308 02:13:03.049514 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b9zmx" podUID="0d52435e-c857-4ebe-aaed-bee6a074bfc9" containerName="registry-server" probeResult="failure" output=< Mar 08 02:13:03 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 02:13:03 crc kubenswrapper[4762]: > Mar 08 02:13:12 crc kubenswrapper[4762]: I0308 02:13:12.851666 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 02:13:12 crc kubenswrapper[4762]: I0308 02:13:12.852325 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 02:13:12 crc kubenswrapper[4762]: I0308 02:13:12.852378 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 02:13:12 crc kubenswrapper[4762]: I0308 02:13:12.852878 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c27cb3ea76a81c975d39fac956f11a7f4bc22a19f950230dcd59d64f691cbdf2"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 02:13:12 crc kubenswrapper[4762]: I0308 02:13:12.852932 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://c27cb3ea76a81c975d39fac956f11a7f4bc22a19f950230dcd59d64f691cbdf2" gracePeriod=600 Mar 08 02:13:13 crc kubenswrapper[4762]: I0308 02:13:13.065104 4762 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-b9zmx" podUID="0d52435e-c857-4ebe-aaed-bee6a074bfc9" containerName="registry-server" probeResult="failure" output=< Mar 08 02:13:13 crc kubenswrapper[4762]: timeout: failed to connect service ":50051" within 1s Mar 08 02:13:13 crc kubenswrapper[4762]: > Mar 08 02:13:13 crc kubenswrapper[4762]: I0308 02:13:13.772235 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="c27cb3ea76a81c975d39fac956f11a7f4bc22a19f950230dcd59d64f691cbdf2" exitCode=0 Mar 08 02:13:13 crc kubenswrapper[4762]: I0308 02:13:13.772332 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"c27cb3ea76a81c975d39fac956f11a7f4bc22a19f950230dcd59d64f691cbdf2"} Mar 08 02:13:13 crc kubenswrapper[4762]: I0308 02:13:13.772633 4762 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerStarted","Data":"c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3"} Mar 08 02:13:13 crc kubenswrapper[4762]: I0308 02:13:13.772668 4762 scope.go:117] "RemoveContainer" containerID="365ee660fe6668b35ebd1a709362bf25b23635df40ff091925c14ae98d914fad" Mar 08 02:13:22 crc kubenswrapper[4762]: I0308 02:13:22.066654 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:13:22 crc kubenswrapper[4762]: I0308 02:13:22.165908 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:13:22 crc kubenswrapper[4762]: I0308 02:13:22.331054 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b9zmx"] Mar 08 02:13:23 crc kubenswrapper[4762]: I0308 02:13:23.964638 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b9zmx" podUID="0d52435e-c857-4ebe-aaed-bee6a074bfc9" containerName="registry-server" containerID="cri-o://7c109d689b66e6833446ccea5d216d153744bb2577b97764a836bd74268d6ab2" gracePeriod=2 Mar 08 02:13:24 crc kubenswrapper[4762]: I0308 02:13:24.601787 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:13:24 crc kubenswrapper[4762]: I0308 02:13:24.769439 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d52435e-c857-4ebe-aaed-bee6a074bfc9-catalog-content\") pod \"0d52435e-c857-4ebe-aaed-bee6a074bfc9\" (UID: \"0d52435e-c857-4ebe-aaed-bee6a074bfc9\") " Mar 08 02:13:24 crc kubenswrapper[4762]: I0308 02:13:24.769703 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d52435e-c857-4ebe-aaed-bee6a074bfc9-utilities\") pod \"0d52435e-c857-4ebe-aaed-bee6a074bfc9\" (UID: \"0d52435e-c857-4ebe-aaed-bee6a074bfc9\") " Mar 08 02:13:24 crc kubenswrapper[4762]: I0308 02:13:24.769871 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czxpg\" (UniqueName: \"kubernetes.io/projected/0d52435e-c857-4ebe-aaed-bee6a074bfc9-kube-api-access-czxpg\") pod \"0d52435e-c857-4ebe-aaed-bee6a074bfc9\" (UID: \"0d52435e-c857-4ebe-aaed-bee6a074bfc9\") " Mar 08 02:13:24 crc kubenswrapper[4762]: I0308 02:13:24.770634 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d52435e-c857-4ebe-aaed-bee6a074bfc9-utilities" (OuterVolumeSpecName: "utilities") pod "0d52435e-c857-4ebe-aaed-bee6a074bfc9" (UID: "0d52435e-c857-4ebe-aaed-bee6a074bfc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 02:13:24 crc kubenswrapper[4762]: I0308 02:13:24.774795 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d52435e-c857-4ebe-aaed-bee6a074bfc9-kube-api-access-czxpg" (OuterVolumeSpecName: "kube-api-access-czxpg") pod "0d52435e-c857-4ebe-aaed-bee6a074bfc9" (UID: "0d52435e-c857-4ebe-aaed-bee6a074bfc9"). InnerVolumeSpecName "kube-api-access-czxpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:13:24 crc kubenswrapper[4762]: I0308 02:13:24.872395 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czxpg\" (UniqueName: \"kubernetes.io/projected/0d52435e-c857-4ebe-aaed-bee6a074bfc9-kube-api-access-czxpg\") on node \"crc\" DevicePath \"\"" Mar 08 02:13:24 crc kubenswrapper[4762]: I0308 02:13:24.872428 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d52435e-c857-4ebe-aaed-bee6a074bfc9-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 02:13:24 crc kubenswrapper[4762]: I0308 02:13:24.924919 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d52435e-c857-4ebe-aaed-bee6a074bfc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d52435e-c857-4ebe-aaed-bee6a074bfc9" (UID: "0d52435e-c857-4ebe-aaed-bee6a074bfc9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 02:13:24 crc kubenswrapper[4762]: I0308 02:13:24.974831 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d52435e-c857-4ebe-aaed-bee6a074bfc9-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 02:13:24 crc kubenswrapper[4762]: I0308 02:13:24.976881 4762 generic.go:334] "Generic (PLEG): container finished" podID="0d52435e-c857-4ebe-aaed-bee6a074bfc9" containerID="7c109d689b66e6833446ccea5d216d153744bb2577b97764a836bd74268d6ab2" exitCode=0 Mar 08 02:13:24 crc kubenswrapper[4762]: I0308 02:13:24.976926 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-b9zmx" Mar 08 02:13:24 crc kubenswrapper[4762]: I0308 02:13:24.976919 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9zmx" event={"ID":"0d52435e-c857-4ebe-aaed-bee6a074bfc9","Type":"ContainerDied","Data":"7c109d689b66e6833446ccea5d216d153744bb2577b97764a836bd74268d6ab2"} Mar 08 02:13:24 crc kubenswrapper[4762]: I0308 02:13:24.977127 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b9zmx" event={"ID":"0d52435e-c857-4ebe-aaed-bee6a074bfc9","Type":"ContainerDied","Data":"eda58ace84a7b7dfa5d1671a85c8b3ce1146c784585503b1b64f1b9bddc0f122"} Mar 08 02:13:24 crc kubenswrapper[4762]: I0308 02:13:24.977162 4762 scope.go:117] "RemoveContainer" containerID="7c109d689b66e6833446ccea5d216d153744bb2577b97764a836bd74268d6ab2" Mar 08 02:13:24 crc kubenswrapper[4762]: I0308 02:13:24.997980 4762 scope.go:117] "RemoveContainer" containerID="7b0f7430687aff1318fcadd799a567d4b846ed1454c73efffa6a53317a66c4e2" Mar 08 02:13:25 crc kubenswrapper[4762]: I0308 02:13:25.018868 4762 scope.go:117] "RemoveContainer" containerID="51621255fadd3fd887579214c74dea37ab67f49001791485d6f8a86fa9c60abc" Mar 08 02:13:25 crc kubenswrapper[4762]: I0308 02:13:25.101526 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b9zmx"] Mar 08 02:13:25 crc kubenswrapper[4762]: I0308 02:13:25.110747 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b9zmx"] Mar 08 02:13:25 crc kubenswrapper[4762]: I0308 02:13:25.111026 4762 scope.go:117] "RemoveContainer" containerID="7c109d689b66e6833446ccea5d216d153744bb2577b97764a836bd74268d6ab2" Mar 08 02:13:25 crc kubenswrapper[4762]: E0308 02:13:25.111447 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7c109d689b66e6833446ccea5d216d153744bb2577b97764a836bd74268d6ab2\": container with ID starting with 7c109d689b66e6833446ccea5d216d153744bb2577b97764a836bd74268d6ab2 not found: ID does not exist" containerID="7c109d689b66e6833446ccea5d216d153744bb2577b97764a836bd74268d6ab2" Mar 08 02:13:25 crc kubenswrapper[4762]: I0308 02:13:25.111508 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c109d689b66e6833446ccea5d216d153744bb2577b97764a836bd74268d6ab2"} err="failed to get container status \"7c109d689b66e6833446ccea5d216d153744bb2577b97764a836bd74268d6ab2\": rpc error: code = NotFound desc = could not find container \"7c109d689b66e6833446ccea5d216d153744bb2577b97764a836bd74268d6ab2\": container with ID starting with 7c109d689b66e6833446ccea5d216d153744bb2577b97764a836bd74268d6ab2 not found: ID does not exist" Mar 08 02:13:25 crc kubenswrapper[4762]: I0308 02:13:25.111548 4762 scope.go:117] "RemoveContainer" containerID="7b0f7430687aff1318fcadd799a567d4b846ed1454c73efffa6a53317a66c4e2" Mar 08 02:13:25 crc kubenswrapper[4762]: E0308 02:13:25.119104 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b0f7430687aff1318fcadd799a567d4b846ed1454c73efffa6a53317a66c4e2\": container with ID starting with 7b0f7430687aff1318fcadd799a567d4b846ed1454c73efffa6a53317a66c4e2 not found: ID does not exist" containerID="7b0f7430687aff1318fcadd799a567d4b846ed1454c73efffa6a53317a66c4e2" Mar 08 02:13:25 crc kubenswrapper[4762]: I0308 02:13:25.119133 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b0f7430687aff1318fcadd799a567d4b846ed1454c73efffa6a53317a66c4e2"} err="failed to get container status \"7b0f7430687aff1318fcadd799a567d4b846ed1454c73efffa6a53317a66c4e2\": rpc error: code = NotFound desc = could not find container \"7b0f7430687aff1318fcadd799a567d4b846ed1454c73efffa6a53317a66c4e2\": container with ID 
starting with 7b0f7430687aff1318fcadd799a567d4b846ed1454c73efffa6a53317a66c4e2 not found: ID does not exist" Mar 08 02:13:25 crc kubenswrapper[4762]: I0308 02:13:25.119157 4762 scope.go:117] "RemoveContainer" containerID="51621255fadd3fd887579214c74dea37ab67f49001791485d6f8a86fa9c60abc" Mar 08 02:13:25 crc kubenswrapper[4762]: E0308 02:13:25.119561 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51621255fadd3fd887579214c74dea37ab67f49001791485d6f8a86fa9c60abc\": container with ID starting with 51621255fadd3fd887579214c74dea37ab67f49001791485d6f8a86fa9c60abc not found: ID does not exist" containerID="51621255fadd3fd887579214c74dea37ab67f49001791485d6f8a86fa9c60abc" Mar 08 02:13:25 crc kubenswrapper[4762]: I0308 02:13:25.119606 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51621255fadd3fd887579214c74dea37ab67f49001791485d6f8a86fa9c60abc"} err="failed to get container status \"51621255fadd3fd887579214c74dea37ab67f49001791485d6f8a86fa9c60abc\": rpc error: code = NotFound desc = could not find container \"51621255fadd3fd887579214c74dea37ab67f49001791485d6f8a86fa9c60abc\": container with ID starting with 51621255fadd3fd887579214c74dea37ab67f49001791485d6f8a86fa9c60abc not found: ID does not exist" Mar 08 02:13:25 crc kubenswrapper[4762]: I0308 02:13:25.279569 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d52435e-c857-4ebe-aaed-bee6a074bfc9" path="/var/lib/kubelet/pods/0d52435e-c857-4ebe-aaed-bee6a074bfc9/volumes" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.517397 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ntvjq"] Mar 08 02:13:34 crc kubenswrapper[4762]: E0308 02:13:34.519047 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d52435e-c857-4ebe-aaed-bee6a074bfc9" containerName="registry-server" Mar 08 02:13:34 crc 
kubenswrapper[4762]: I0308 02:13:34.519082 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d52435e-c857-4ebe-aaed-bee6a074bfc9" containerName="registry-server" Mar 08 02:13:34 crc kubenswrapper[4762]: E0308 02:13:34.519164 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971d6b8b-7db5-4083-a7ce-3789520e3e13" containerName="extract-utilities" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.519178 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="971d6b8b-7db5-4083-a7ce-3789520e3e13" containerName="extract-utilities" Mar 08 02:13:34 crc kubenswrapper[4762]: E0308 02:13:34.519196 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d52435e-c857-4ebe-aaed-bee6a074bfc9" containerName="extract-content" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.519209 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d52435e-c857-4ebe-aaed-bee6a074bfc9" containerName="extract-content" Mar 08 02:13:34 crc kubenswrapper[4762]: E0308 02:13:34.519250 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971d6b8b-7db5-4083-a7ce-3789520e3e13" containerName="registry-server" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.519267 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="971d6b8b-7db5-4083-a7ce-3789520e3e13" containerName="registry-server" Mar 08 02:13:34 crc kubenswrapper[4762]: E0308 02:13:34.519306 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d52435e-c857-4ebe-aaed-bee6a074bfc9" containerName="extract-utilities" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.519322 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d52435e-c857-4ebe-aaed-bee6a074bfc9" containerName="extract-utilities" Mar 08 02:13:34 crc kubenswrapper[4762]: E0308 02:13:34.519348 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="971d6b8b-7db5-4083-a7ce-3789520e3e13" containerName="extract-content" Mar 08 02:13:34 crc 
kubenswrapper[4762]: I0308 02:13:34.519364 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="971d6b8b-7db5-4083-a7ce-3789520e3e13" containerName="extract-content" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.519915 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d52435e-c857-4ebe-aaed-bee6a074bfc9" containerName="registry-server" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.520128 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="971d6b8b-7db5-4083-a7ce-3789520e3e13" containerName="registry-server" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.523450 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.534599 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ntvjq"] Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.646099 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090d4e29-48e4-41f9-b696-79bcca15cc5d-catalog-content\") pod \"community-operators-ntvjq\" (UID: \"090d4e29-48e4-41f9-b696-79bcca15cc5d\") " pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.646460 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvwfn\" (UniqueName: \"kubernetes.io/projected/090d4e29-48e4-41f9-b696-79bcca15cc5d-kube-api-access-rvwfn\") pod \"community-operators-ntvjq\" (UID: \"090d4e29-48e4-41f9-b696-79bcca15cc5d\") " pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.646837 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/090d4e29-48e4-41f9-b696-79bcca15cc5d-utilities\") pod \"community-operators-ntvjq\" (UID: \"090d4e29-48e4-41f9-b696-79bcca15cc5d\") " pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.748485 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvwfn\" (UniqueName: \"kubernetes.io/projected/090d4e29-48e4-41f9-b696-79bcca15cc5d-kube-api-access-rvwfn\") pod \"community-operators-ntvjq\" (UID: \"090d4e29-48e4-41f9-b696-79bcca15cc5d\") " pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.748593 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090d4e29-48e4-41f9-b696-79bcca15cc5d-utilities\") pod \"community-operators-ntvjq\" (UID: \"090d4e29-48e4-41f9-b696-79bcca15cc5d\") " pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.748710 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090d4e29-48e4-41f9-b696-79bcca15cc5d-catalog-content\") pod \"community-operators-ntvjq\" (UID: \"090d4e29-48e4-41f9-b696-79bcca15cc5d\") " pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.749256 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090d4e29-48e4-41f9-b696-79bcca15cc5d-utilities\") pod \"community-operators-ntvjq\" (UID: \"090d4e29-48e4-41f9-b696-79bcca15cc5d\") " pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.749279 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/090d4e29-48e4-41f9-b696-79bcca15cc5d-catalog-content\") pod \"community-operators-ntvjq\" (UID: \"090d4e29-48e4-41f9-b696-79bcca15cc5d\") " pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.775338 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvwfn\" (UniqueName: \"kubernetes.io/projected/090d4e29-48e4-41f9-b696-79bcca15cc5d-kube-api-access-rvwfn\") pod \"community-operators-ntvjq\" (UID: \"090d4e29-48e4-41f9-b696-79bcca15cc5d\") " pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:34 crc kubenswrapper[4762]: I0308 02:13:34.856515 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:35 crc kubenswrapper[4762]: I0308 02:13:35.371510 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ntvjq"] Mar 08 02:13:35 crc kubenswrapper[4762]: W0308 02:13:35.377429 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod090d4e29_48e4_41f9_b696_79bcca15cc5d.slice/crio-efb32f2787789c537d2edf46b00734748187bd8899b045be3fd78a0ef5b73cd0 WatchSource:0}: Error finding container efb32f2787789c537d2edf46b00734748187bd8899b045be3fd78a0ef5b73cd0: Status 404 returned error can't find the container with id efb32f2787789c537d2edf46b00734748187bd8899b045be3fd78a0ef5b73cd0 Mar 08 02:13:36 crc kubenswrapper[4762]: I0308 02:13:36.122683 4762 generic.go:334] "Generic (PLEG): container finished" podID="090d4e29-48e4-41f9-b696-79bcca15cc5d" containerID="74ecbf18af805c97e1050d272e3efb04d0f979a533f9d4db85ea661b934f25cd" exitCode=0 Mar 08 02:13:36 crc kubenswrapper[4762]: I0308 02:13:36.122805 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntvjq" 
event={"ID":"090d4e29-48e4-41f9-b696-79bcca15cc5d","Type":"ContainerDied","Data":"74ecbf18af805c97e1050d272e3efb04d0f979a533f9d4db85ea661b934f25cd"} Mar 08 02:13:36 crc kubenswrapper[4762]: I0308 02:13:36.123428 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntvjq" event={"ID":"090d4e29-48e4-41f9-b696-79bcca15cc5d","Type":"ContainerStarted","Data":"efb32f2787789c537d2edf46b00734748187bd8899b045be3fd78a0ef5b73cd0"} Mar 08 02:13:38 crc kubenswrapper[4762]: I0308 02:13:38.150184 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntvjq" event={"ID":"090d4e29-48e4-41f9-b696-79bcca15cc5d","Type":"ContainerStarted","Data":"c8ebf6f6f01cb2dad316b04eeaaca6a264b04d6ef2768b9c79c5119b11b063b0"} Mar 08 02:13:39 crc kubenswrapper[4762]: I0308 02:13:39.166744 4762 generic.go:334] "Generic (PLEG): container finished" podID="090d4e29-48e4-41f9-b696-79bcca15cc5d" containerID="c8ebf6f6f01cb2dad316b04eeaaca6a264b04d6ef2768b9c79c5119b11b063b0" exitCode=0 Mar 08 02:13:39 crc kubenswrapper[4762]: I0308 02:13:39.166930 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntvjq" event={"ID":"090d4e29-48e4-41f9-b696-79bcca15cc5d","Type":"ContainerDied","Data":"c8ebf6f6f01cb2dad316b04eeaaca6a264b04d6ef2768b9c79c5119b11b063b0"} Mar 08 02:13:40 crc kubenswrapper[4762]: I0308 02:13:40.179117 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntvjq" event={"ID":"090d4e29-48e4-41f9-b696-79bcca15cc5d","Type":"ContainerStarted","Data":"8216cf1a3398b4aa1a1d137f91754ae35a00c04bcdde479647e614015a4af5cc"} Mar 08 02:13:40 crc kubenswrapper[4762]: I0308 02:13:40.204940 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ntvjq" podStartSLOduration=2.733502887 podStartE2EDuration="6.204916626s" podCreationTimestamp="2026-03-08 02:13:34 
+0000 UTC" firstStartedPulling="2026-03-08 02:13:36.125240571 +0000 UTC m=+6637.599384905" lastFinishedPulling="2026-03-08 02:13:39.59665426 +0000 UTC m=+6641.070798644" observedRunningTime="2026-03-08 02:13:40.195271559 +0000 UTC m=+6641.669415903" watchObservedRunningTime="2026-03-08 02:13:40.204916626 +0000 UTC m=+6641.679060980" Mar 08 02:13:44 crc kubenswrapper[4762]: I0308 02:13:44.857192 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:44 crc kubenswrapper[4762]: I0308 02:13:44.858062 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:44 crc kubenswrapper[4762]: I0308 02:13:44.952627 4762 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:45 crc kubenswrapper[4762]: I0308 02:13:45.317427 4762 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:45 crc kubenswrapper[4762]: I0308 02:13:45.393457 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ntvjq"] Mar 08 02:13:47 crc kubenswrapper[4762]: I0308 02:13:47.302079 4762 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ntvjq" podUID="090d4e29-48e4-41f9-b696-79bcca15cc5d" containerName="registry-server" containerID="cri-o://8216cf1a3398b4aa1a1d137f91754ae35a00c04bcdde479647e614015a4af5cc" gracePeriod=2 Mar 08 02:13:47 crc kubenswrapper[4762]: I0308 02:13:47.919041 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.002552 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090d4e29-48e4-41f9-b696-79bcca15cc5d-catalog-content\") pod \"090d4e29-48e4-41f9-b696-79bcca15cc5d\" (UID: \"090d4e29-48e4-41f9-b696-79bcca15cc5d\") " Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.002657 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090d4e29-48e4-41f9-b696-79bcca15cc5d-utilities\") pod \"090d4e29-48e4-41f9-b696-79bcca15cc5d\" (UID: \"090d4e29-48e4-41f9-b696-79bcca15cc5d\") " Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.002701 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvwfn\" (UniqueName: \"kubernetes.io/projected/090d4e29-48e4-41f9-b696-79bcca15cc5d-kube-api-access-rvwfn\") pod \"090d4e29-48e4-41f9-b696-79bcca15cc5d\" (UID: \"090d4e29-48e4-41f9-b696-79bcca15cc5d\") " Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.003353 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/090d4e29-48e4-41f9-b696-79bcca15cc5d-utilities" (OuterVolumeSpecName: "utilities") pod "090d4e29-48e4-41f9-b696-79bcca15cc5d" (UID: "090d4e29-48e4-41f9-b696-79bcca15cc5d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.006440 4762 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/090d4e29-48e4-41f9-b696-79bcca15cc5d-utilities\") on node \"crc\" DevicePath \"\"" Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.009338 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/090d4e29-48e4-41f9-b696-79bcca15cc5d-kube-api-access-rvwfn" (OuterVolumeSpecName: "kube-api-access-rvwfn") pod "090d4e29-48e4-41f9-b696-79bcca15cc5d" (UID: "090d4e29-48e4-41f9-b696-79bcca15cc5d"). InnerVolumeSpecName "kube-api-access-rvwfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.058591 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/090d4e29-48e4-41f9-b696-79bcca15cc5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "090d4e29-48e4-41f9-b696-79bcca15cc5d" (UID: "090d4e29-48e4-41f9-b696-79bcca15cc5d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.108351 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvwfn\" (UniqueName: \"kubernetes.io/projected/090d4e29-48e4-41f9-b696-79bcca15cc5d-kube-api-access-rvwfn\") on node \"crc\" DevicePath \"\"" Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.108383 4762 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/090d4e29-48e4-41f9-b696-79bcca15cc5d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.315884 4762 generic.go:334] "Generic (PLEG): container finished" podID="090d4e29-48e4-41f9-b696-79bcca15cc5d" containerID="8216cf1a3398b4aa1a1d137f91754ae35a00c04bcdde479647e614015a4af5cc" exitCode=0 Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.315950 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntvjq" event={"ID":"090d4e29-48e4-41f9-b696-79bcca15cc5d","Type":"ContainerDied","Data":"8216cf1a3398b4aa1a1d137f91754ae35a00c04bcdde479647e614015a4af5cc"} Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.316039 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ntvjq" event={"ID":"090d4e29-48e4-41f9-b696-79bcca15cc5d","Type":"ContainerDied","Data":"efb32f2787789c537d2edf46b00734748187bd8899b045be3fd78a0ef5b73cd0"} Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.316070 4762 scope.go:117] "RemoveContainer" containerID="8216cf1a3398b4aa1a1d137f91754ae35a00c04bcdde479647e614015a4af5cc" Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.316063 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ntvjq" Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.346477 4762 scope.go:117] "RemoveContainer" containerID="c8ebf6f6f01cb2dad316b04eeaaca6a264b04d6ef2768b9c79c5119b11b063b0" Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.377474 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ntvjq"] Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.382081 4762 scope.go:117] "RemoveContainer" containerID="74ecbf18af805c97e1050d272e3efb04d0f979a533f9d4db85ea661b934f25cd" Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.391231 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ntvjq"] Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.435900 4762 scope.go:117] "RemoveContainer" containerID="8216cf1a3398b4aa1a1d137f91754ae35a00c04bcdde479647e614015a4af5cc" Mar 08 02:13:48 crc kubenswrapper[4762]: E0308 02:13:48.437182 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8216cf1a3398b4aa1a1d137f91754ae35a00c04bcdde479647e614015a4af5cc\": container with ID starting with 8216cf1a3398b4aa1a1d137f91754ae35a00c04bcdde479647e614015a4af5cc not found: ID does not exist" containerID="8216cf1a3398b4aa1a1d137f91754ae35a00c04bcdde479647e614015a4af5cc" Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.437234 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8216cf1a3398b4aa1a1d137f91754ae35a00c04bcdde479647e614015a4af5cc"} err="failed to get container status \"8216cf1a3398b4aa1a1d137f91754ae35a00c04bcdde479647e614015a4af5cc\": rpc error: code = NotFound desc = could not find container \"8216cf1a3398b4aa1a1d137f91754ae35a00c04bcdde479647e614015a4af5cc\": container with ID starting with 8216cf1a3398b4aa1a1d137f91754ae35a00c04bcdde479647e614015a4af5cc not 
found: ID does not exist" Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.437268 4762 scope.go:117] "RemoveContainer" containerID="c8ebf6f6f01cb2dad316b04eeaaca6a264b04d6ef2768b9c79c5119b11b063b0" Mar 08 02:13:48 crc kubenswrapper[4762]: E0308 02:13:48.438033 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8ebf6f6f01cb2dad316b04eeaaca6a264b04d6ef2768b9c79c5119b11b063b0\": container with ID starting with c8ebf6f6f01cb2dad316b04eeaaca6a264b04d6ef2768b9c79c5119b11b063b0 not found: ID does not exist" containerID="c8ebf6f6f01cb2dad316b04eeaaca6a264b04d6ef2768b9c79c5119b11b063b0" Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.438081 4762 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8ebf6f6f01cb2dad316b04eeaaca6a264b04d6ef2768b9c79c5119b11b063b0"} err="failed to get container status \"c8ebf6f6f01cb2dad316b04eeaaca6a264b04d6ef2768b9c79c5119b11b063b0\": rpc error: code = NotFound desc = could not find container \"c8ebf6f6f01cb2dad316b04eeaaca6a264b04d6ef2768b9c79c5119b11b063b0\": container with ID starting with c8ebf6f6f01cb2dad316b04eeaaca6a264b04d6ef2768b9c79c5119b11b063b0 not found: ID does not exist" Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.438112 4762 scope.go:117] "RemoveContainer" containerID="74ecbf18af805c97e1050d272e3efb04d0f979a533f9d4db85ea661b934f25cd" Mar 08 02:13:48 crc kubenswrapper[4762]: E0308 02:13:48.438495 4762 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74ecbf18af805c97e1050d272e3efb04d0f979a533f9d4db85ea661b934f25cd\": container with ID starting with 74ecbf18af805c97e1050d272e3efb04d0f979a533f9d4db85ea661b934f25cd not found: ID does not exist" containerID="74ecbf18af805c97e1050d272e3efb04d0f979a533f9d4db85ea661b934f25cd" Mar 08 02:13:48 crc kubenswrapper[4762]: I0308 02:13:48.438556 4762 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74ecbf18af805c97e1050d272e3efb04d0f979a533f9d4db85ea661b934f25cd"} err="failed to get container status \"74ecbf18af805c97e1050d272e3efb04d0f979a533f9d4db85ea661b934f25cd\": rpc error: code = NotFound desc = could not find container \"74ecbf18af805c97e1050d272e3efb04d0f979a533f9d4db85ea661b934f25cd\": container with ID starting with 74ecbf18af805c97e1050d272e3efb04d0f979a533f9d4db85ea661b934f25cd not found: ID does not exist" Mar 08 02:13:49 crc kubenswrapper[4762]: I0308 02:13:49.286908 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="090d4e29-48e4-41f9-b696-79bcca15cc5d" path="/var/lib/kubelet/pods/090d4e29-48e4-41f9-b696-79bcca15cc5d/volumes" Mar 08 02:14:00 crc kubenswrapper[4762]: I0308 02:14:00.178693 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548934-gk9qm"] Mar 08 02:14:00 crc kubenswrapper[4762]: E0308 02:14:00.179804 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090d4e29-48e4-41f9-b696-79bcca15cc5d" containerName="registry-server" Mar 08 02:14:00 crc kubenswrapper[4762]: I0308 02:14:00.179820 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="090d4e29-48e4-41f9-b696-79bcca15cc5d" containerName="registry-server" Mar 08 02:14:00 crc kubenswrapper[4762]: E0308 02:14:00.179850 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090d4e29-48e4-41f9-b696-79bcca15cc5d" containerName="extract-utilities" Mar 08 02:14:00 crc kubenswrapper[4762]: I0308 02:14:00.179858 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="090d4e29-48e4-41f9-b696-79bcca15cc5d" containerName="extract-utilities" Mar 08 02:14:00 crc kubenswrapper[4762]: E0308 02:14:00.179880 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="090d4e29-48e4-41f9-b696-79bcca15cc5d" containerName="extract-content" Mar 08 02:14:00 crc kubenswrapper[4762]: I0308 
02:14:00.179889 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="090d4e29-48e4-41f9-b696-79bcca15cc5d" containerName="extract-content" Mar 08 02:14:00 crc kubenswrapper[4762]: I0308 02:14:00.180177 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="090d4e29-48e4-41f9-b696-79bcca15cc5d" containerName="registry-server" Mar 08 02:14:00 crc kubenswrapper[4762]: I0308 02:14:00.181090 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548934-gk9qm" Mar 08 02:14:00 crc kubenswrapper[4762]: I0308 02:14:00.185939 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 02:14:00 crc kubenswrapper[4762]: I0308 02:14:00.186320 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 02:14:00 crc kubenswrapper[4762]: I0308 02:14:00.186507 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 02:14:00 crc kubenswrapper[4762]: I0308 02:14:00.210181 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548934-gk9qm"] Mar 08 02:14:00 crc kubenswrapper[4762]: I0308 02:14:00.219498 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl5ss\" (UniqueName: \"kubernetes.io/projected/34392ef6-d32c-4141-82e3-94cc6f0682e2-kube-api-access-jl5ss\") pod \"auto-csr-approver-29548934-gk9qm\" (UID: \"34392ef6-d32c-4141-82e3-94cc6f0682e2\") " pod="openshift-infra/auto-csr-approver-29548934-gk9qm" Mar 08 02:14:00 crc kubenswrapper[4762]: I0308 02:14:00.322993 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl5ss\" (UniqueName: \"kubernetes.io/projected/34392ef6-d32c-4141-82e3-94cc6f0682e2-kube-api-access-jl5ss\") pod \"auto-csr-approver-29548934-gk9qm\" (UID: 
\"34392ef6-d32c-4141-82e3-94cc6f0682e2\") " pod="openshift-infra/auto-csr-approver-29548934-gk9qm" Mar 08 02:14:00 crc kubenswrapper[4762]: I0308 02:14:00.346449 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl5ss\" (UniqueName: \"kubernetes.io/projected/34392ef6-d32c-4141-82e3-94cc6f0682e2-kube-api-access-jl5ss\") pod \"auto-csr-approver-29548934-gk9qm\" (UID: \"34392ef6-d32c-4141-82e3-94cc6f0682e2\") " pod="openshift-infra/auto-csr-approver-29548934-gk9qm" Mar 08 02:14:00 crc kubenswrapper[4762]: I0308 02:14:00.511340 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548934-gk9qm" Mar 08 02:14:01 crc kubenswrapper[4762]: I0308 02:14:01.107590 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548934-gk9qm"] Mar 08 02:14:01 crc kubenswrapper[4762]: I0308 02:14:01.500861 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548934-gk9qm" event={"ID":"34392ef6-d32c-4141-82e3-94cc6f0682e2","Type":"ContainerStarted","Data":"2bddafc4b823b679caafe4c3dc8c02c49f9498775b2c1c07610ebc4d45f544c8"} Mar 08 02:14:02 crc kubenswrapper[4762]: I0308 02:14:02.511982 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548934-gk9qm" event={"ID":"34392ef6-d32c-4141-82e3-94cc6f0682e2","Type":"ContainerStarted","Data":"07e409d860d4edb71801e5ad3d5c8fc4d58183a7ba2d87386f20b41adb855675"} Mar 08 02:14:02 crc kubenswrapper[4762]: I0308 02:14:02.539529 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29548934-gk9qm" podStartSLOduration=1.754362097 podStartE2EDuration="2.539511359s" podCreationTimestamp="2026-03-08 02:14:00 +0000 UTC" firstStartedPulling="2026-03-08 02:14:01.097921114 +0000 UTC m=+6662.572065498" lastFinishedPulling="2026-03-08 02:14:01.883070386 +0000 UTC m=+6663.357214760" 
observedRunningTime="2026-03-08 02:14:02.524582367 +0000 UTC m=+6663.998726731" watchObservedRunningTime="2026-03-08 02:14:02.539511359 +0000 UTC m=+6664.013655703" Mar 08 02:14:04 crc kubenswrapper[4762]: I0308 02:14:04.538601 4762 generic.go:334] "Generic (PLEG): container finished" podID="34392ef6-d32c-4141-82e3-94cc6f0682e2" containerID="07e409d860d4edb71801e5ad3d5c8fc4d58183a7ba2d87386f20b41adb855675" exitCode=0 Mar 08 02:14:04 crc kubenswrapper[4762]: I0308 02:14:04.539053 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548934-gk9qm" event={"ID":"34392ef6-d32c-4141-82e3-94cc6f0682e2","Type":"ContainerDied","Data":"07e409d860d4edb71801e5ad3d5c8fc4d58183a7ba2d87386f20b41adb855675"} Mar 08 02:14:06 crc kubenswrapper[4762]: I0308 02:14:06.051840 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548934-gk9qm" Mar 08 02:14:06 crc kubenswrapper[4762]: I0308 02:14:06.094802 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl5ss\" (UniqueName: \"kubernetes.io/projected/34392ef6-d32c-4141-82e3-94cc6f0682e2-kube-api-access-jl5ss\") pod \"34392ef6-d32c-4141-82e3-94cc6f0682e2\" (UID: \"34392ef6-d32c-4141-82e3-94cc6f0682e2\") " Mar 08 02:14:06 crc kubenswrapper[4762]: I0308 02:14:06.102953 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34392ef6-d32c-4141-82e3-94cc6f0682e2-kube-api-access-jl5ss" (OuterVolumeSpecName: "kube-api-access-jl5ss") pod "34392ef6-d32c-4141-82e3-94cc6f0682e2" (UID: "34392ef6-d32c-4141-82e3-94cc6f0682e2"). InnerVolumeSpecName "kube-api-access-jl5ss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:14:06 crc kubenswrapper[4762]: I0308 02:14:06.198293 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl5ss\" (UniqueName: \"kubernetes.io/projected/34392ef6-d32c-4141-82e3-94cc6f0682e2-kube-api-access-jl5ss\") on node \"crc\" DevicePath \"\"" Mar 08 02:14:06 crc kubenswrapper[4762]: I0308 02:14:06.574929 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548934-gk9qm" event={"ID":"34392ef6-d32c-4141-82e3-94cc6f0682e2","Type":"ContainerDied","Data":"2bddafc4b823b679caafe4c3dc8c02c49f9498775b2c1c07610ebc4d45f544c8"} Mar 08 02:14:06 crc kubenswrapper[4762]: I0308 02:14:06.574977 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bddafc4b823b679caafe4c3dc8c02c49f9498775b2c1c07610ebc4d45f544c8" Mar 08 02:14:06 crc kubenswrapper[4762]: I0308 02:14:06.575027 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548934-gk9qm" Mar 08 02:14:06 crc kubenswrapper[4762]: I0308 02:14:06.645308 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548928-z5kfc"] Mar 08 02:14:06 crc kubenswrapper[4762]: I0308 02:14:06.664049 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548928-z5kfc"] Mar 08 02:14:07 crc kubenswrapper[4762]: I0308 02:14:07.282294 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97613940-eb08-4bcb-9e99-08dd386e6843" path="/var/lib/kubelet/pods/97613940-eb08-4bcb-9e99-08dd386e6843/volumes" Mar 08 02:14:49 crc kubenswrapper[4762]: I0308 02:14:49.142352 4762 scope.go:117] "RemoveContainer" containerID="06e428f27203e4309da9073bf6df2765a76d74757a4c5c7133f7d560919785c4" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.162068 4762 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp"] Mar 08 02:15:00 crc kubenswrapper[4762]: E0308 02:15:00.163139 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34392ef6-d32c-4141-82e3-94cc6f0682e2" containerName="oc" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.163249 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="34392ef6-d32c-4141-82e3-94cc6f0682e2" containerName="oc" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.163551 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="34392ef6-d32c-4141-82e3-94cc6f0682e2" containerName="oc" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.164552 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.167070 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.167080 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.175811 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp"] Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.186353 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-748gn\" (UniqueName: \"kubernetes.io/projected/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-kube-api-access-748gn\") pod \"collect-profiles-29548935-hrzfp\" (UID: \"95a6ba18-a4c9-4f5a-80e7-77d96feb621e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.186533 4762 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-secret-volume\") pod \"collect-profiles-29548935-hrzfp\" (UID: \"95a6ba18-a4c9-4f5a-80e7-77d96feb621e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.186576 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-config-volume\") pod \"collect-profiles-29548935-hrzfp\" (UID: \"95a6ba18-a4c9-4f5a-80e7-77d96feb621e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.289147 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-748gn\" (UniqueName: \"kubernetes.io/projected/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-kube-api-access-748gn\") pod \"collect-profiles-29548935-hrzfp\" (UID: \"95a6ba18-a4c9-4f5a-80e7-77d96feb621e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.289721 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-secret-volume\") pod \"collect-profiles-29548935-hrzfp\" (UID: \"95a6ba18-a4c9-4f5a-80e7-77d96feb621e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.290091 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-config-volume\") pod \"collect-profiles-29548935-hrzfp\" (UID: \"95a6ba18-a4c9-4f5a-80e7-77d96feb621e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.291630 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-config-volume\") pod \"collect-profiles-29548935-hrzfp\" (UID: \"95a6ba18-a4c9-4f5a-80e7-77d96feb621e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.295161 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-secret-volume\") pod \"collect-profiles-29548935-hrzfp\" (UID: \"95a6ba18-a4c9-4f5a-80e7-77d96feb621e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.304686 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-748gn\" (UniqueName: \"kubernetes.io/projected/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-kube-api-access-748gn\") pod \"collect-profiles-29548935-hrzfp\" (UID: \"95a6ba18-a4c9-4f5a-80e7-77d96feb621e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.488696 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" Mar 08 02:15:00 crc kubenswrapper[4762]: I0308 02:15:00.993125 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp"] Mar 08 02:15:00 crc kubenswrapper[4762]: W0308 02:15:00.993578 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95a6ba18_a4c9_4f5a_80e7_77d96feb621e.slice/crio-1d8a87459f496fec7254d94b3b3eda709845532f3895ef7f5e16ddf2d5a1bd5d WatchSource:0}: Error finding container 1d8a87459f496fec7254d94b3b3eda709845532f3895ef7f5e16ddf2d5a1bd5d: Status 404 returned error can't find the container with id 1d8a87459f496fec7254d94b3b3eda709845532f3895ef7f5e16ddf2d5a1bd5d Mar 08 02:15:01 crc kubenswrapper[4762]: I0308 02:15:01.556926 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" event={"ID":"95a6ba18-a4c9-4f5a-80e7-77d96feb621e","Type":"ContainerStarted","Data":"e252aa82a0c8ea84671470bde5a07f35101f211edff678b3c41300952a8f8331"} Mar 08 02:15:01 crc kubenswrapper[4762]: I0308 02:15:01.556994 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" event={"ID":"95a6ba18-a4c9-4f5a-80e7-77d96feb621e","Type":"ContainerStarted","Data":"1d8a87459f496fec7254d94b3b3eda709845532f3895ef7f5e16ddf2d5a1bd5d"} Mar 08 02:15:01 crc kubenswrapper[4762]: I0308 02:15:01.591901 4762 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" podStartSLOduration=1.591873338 podStartE2EDuration="1.591873338s" podCreationTimestamp="2026-03-08 02:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-08 
02:15:01.574869272 +0000 UTC m=+6723.049013686" watchObservedRunningTime="2026-03-08 02:15:01.591873338 +0000 UTC m=+6723.066017722" Mar 08 02:15:02 crc kubenswrapper[4762]: I0308 02:15:02.614933 4762 generic.go:334] "Generic (PLEG): container finished" podID="95a6ba18-a4c9-4f5a-80e7-77d96feb621e" containerID="e252aa82a0c8ea84671470bde5a07f35101f211edff678b3c41300952a8f8331" exitCode=0 Mar 08 02:15:02 crc kubenswrapper[4762]: I0308 02:15:02.615281 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" event={"ID":"95a6ba18-a4c9-4f5a-80e7-77d96feb621e","Type":"ContainerDied","Data":"e252aa82a0c8ea84671470bde5a07f35101f211edff678b3c41300952a8f8331"} Mar 08 02:15:04 crc kubenswrapper[4762]: I0308 02:15:04.154899 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" Mar 08 02:15:04 crc kubenswrapper[4762]: I0308 02:15:04.203158 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-config-volume\") pod \"95a6ba18-a4c9-4f5a-80e7-77d96feb621e\" (UID: \"95a6ba18-a4c9-4f5a-80e7-77d96feb621e\") " Mar 08 02:15:04 crc kubenswrapper[4762]: I0308 02:15:04.203296 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-secret-volume\") pod \"95a6ba18-a4c9-4f5a-80e7-77d96feb621e\" (UID: \"95a6ba18-a4c9-4f5a-80e7-77d96feb621e\") " Mar 08 02:15:04 crc kubenswrapper[4762]: I0308 02:15:04.204179 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-config-volume" (OuterVolumeSpecName: "config-volume") pod "95a6ba18-a4c9-4f5a-80e7-77d96feb621e" (UID: "95a6ba18-a4c9-4f5a-80e7-77d96feb621e"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 08 02:15:04 crc kubenswrapper[4762]: I0308 02:15:04.211999 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "95a6ba18-a4c9-4f5a-80e7-77d96feb621e" (UID: "95a6ba18-a4c9-4f5a-80e7-77d96feb621e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 08 02:15:04 crc kubenswrapper[4762]: I0308 02:15:04.306013 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-748gn\" (UniqueName: \"kubernetes.io/projected/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-kube-api-access-748gn\") pod \"95a6ba18-a4c9-4f5a-80e7-77d96feb621e\" (UID: \"95a6ba18-a4c9-4f5a-80e7-77d96feb621e\") " Mar 08 02:15:04 crc kubenswrapper[4762]: I0308 02:15:04.307497 4762 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-config-volume\") on node \"crc\" DevicePath \"\"" Mar 08 02:15:04 crc kubenswrapper[4762]: I0308 02:15:04.307531 4762 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 08 02:15:04 crc kubenswrapper[4762]: I0308 02:15:04.313783 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-kube-api-access-748gn" (OuterVolumeSpecName: "kube-api-access-748gn") pod "95a6ba18-a4c9-4f5a-80e7-77d96feb621e" (UID: "95a6ba18-a4c9-4f5a-80e7-77d96feb621e"). InnerVolumeSpecName "kube-api-access-748gn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:15:04 crc kubenswrapper[4762]: I0308 02:15:04.409255 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-748gn\" (UniqueName: \"kubernetes.io/projected/95a6ba18-a4c9-4f5a-80e7-77d96feb621e-kube-api-access-748gn\") on node \"crc\" DevicePath \"\"" Mar 08 02:15:04 crc kubenswrapper[4762]: I0308 02:15:04.649914 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" event={"ID":"95a6ba18-a4c9-4f5a-80e7-77d96feb621e","Type":"ContainerDied","Data":"1d8a87459f496fec7254d94b3b3eda709845532f3895ef7f5e16ddf2d5a1bd5d"} Mar 08 02:15:04 crc kubenswrapper[4762]: I0308 02:15:04.649982 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d8a87459f496fec7254d94b3b3eda709845532f3895ef7f5e16ddf2d5a1bd5d" Mar 08 02:15:04 crc kubenswrapper[4762]: I0308 02:15:04.650096 4762 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29548935-hrzfp" Mar 08 02:15:04 crc kubenswrapper[4762]: I0308 02:15:04.689311 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl"] Mar 08 02:15:04 crc kubenswrapper[4762]: I0308 02:15:04.700698 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29548890-v7nzl"] Mar 08 02:15:05 crc kubenswrapper[4762]: I0308 02:15:05.292707 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a2ce822-2dad-4840-ae9a-886c62dd392f" path="/var/lib/kubelet/pods/2a2ce822-2dad-4840-ae9a-886c62dd392f/volumes" Mar 08 02:15:42 crc kubenswrapper[4762]: I0308 02:15:42.852152 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 02:15:42 crc kubenswrapper[4762]: I0308 02:15:42.853028 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 02:15:49 crc kubenswrapper[4762]: I0308 02:15:49.263213 4762 scope.go:117] "RemoveContainer" containerID="cedf3a90c66aaa613b61faf01cfbaefcce797ca73c0d61488f10301693ea2f4e" Mar 08 02:16:00 crc kubenswrapper[4762]: I0308 02:16:00.186603 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548936-5fc89"] Mar 08 02:16:00 crc kubenswrapper[4762]: E0308 02:16:00.187953 4762 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="95a6ba18-a4c9-4f5a-80e7-77d96feb621e" containerName="collect-profiles" Mar 08 02:16:00 crc kubenswrapper[4762]: I0308 02:16:00.187977 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a6ba18-a4c9-4f5a-80e7-77d96feb621e" containerName="collect-profiles" Mar 08 02:16:00 crc kubenswrapper[4762]: I0308 02:16:00.188440 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a6ba18-a4c9-4f5a-80e7-77d96feb621e" containerName="collect-profiles" Mar 08 02:16:00 crc kubenswrapper[4762]: I0308 02:16:00.189729 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548936-5fc89" Mar 08 02:16:00 crc kubenswrapper[4762]: I0308 02:16:00.197301 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 02:16:00 crc kubenswrapper[4762]: I0308 02:16:00.197451 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 02:16:00 crc kubenswrapper[4762]: I0308 02:16:00.197689 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 02:16:00 crc kubenswrapper[4762]: I0308 02:16:00.207173 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548936-5fc89"] Mar 08 02:16:00 crc kubenswrapper[4762]: I0308 02:16:00.372122 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztbx6\" (UniqueName: \"kubernetes.io/projected/8e131ec9-36c7-4193-838b-0380574c71f2-kube-api-access-ztbx6\") pod \"auto-csr-approver-29548936-5fc89\" (UID: \"8e131ec9-36c7-4193-838b-0380574c71f2\") " pod="openshift-infra/auto-csr-approver-29548936-5fc89" Mar 08 02:16:00 crc kubenswrapper[4762]: I0308 02:16:00.478691 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztbx6\" (UniqueName: 
\"kubernetes.io/projected/8e131ec9-36c7-4193-838b-0380574c71f2-kube-api-access-ztbx6\") pod \"auto-csr-approver-29548936-5fc89\" (UID: \"8e131ec9-36c7-4193-838b-0380574c71f2\") " pod="openshift-infra/auto-csr-approver-29548936-5fc89" Mar 08 02:16:00 crc kubenswrapper[4762]: I0308 02:16:00.516860 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztbx6\" (UniqueName: \"kubernetes.io/projected/8e131ec9-36c7-4193-838b-0380574c71f2-kube-api-access-ztbx6\") pod \"auto-csr-approver-29548936-5fc89\" (UID: \"8e131ec9-36c7-4193-838b-0380574c71f2\") " pod="openshift-infra/auto-csr-approver-29548936-5fc89" Mar 08 02:16:00 crc kubenswrapper[4762]: I0308 02:16:00.525392 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548936-5fc89" Mar 08 02:16:01 crc kubenswrapper[4762]: I0308 02:16:01.207536 4762 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 08 02:16:01 crc kubenswrapper[4762]: I0308 02:16:01.209647 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548936-5fc89"] Mar 08 02:16:01 crc kubenswrapper[4762]: I0308 02:16:01.530693 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548936-5fc89" event={"ID":"8e131ec9-36c7-4193-838b-0380574c71f2","Type":"ContainerStarted","Data":"44e4da954d3c997e5a69bf5c70897f3a71d3a8ffe93a734ed59dc275ec1773f8"} Mar 08 02:16:03 crc kubenswrapper[4762]: I0308 02:16:03.561090 4762 generic.go:334] "Generic (PLEG): container finished" podID="8e131ec9-36c7-4193-838b-0380574c71f2" containerID="18cfc3365d71c2257413310162ef31c4c28bace81b1c0f0f5bb1f80748e947b4" exitCode=0 Mar 08 02:16:03 crc kubenswrapper[4762]: I0308 02:16:03.561212 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548936-5fc89" 
event={"ID":"8e131ec9-36c7-4193-838b-0380574c71f2","Type":"ContainerDied","Data":"18cfc3365d71c2257413310162ef31c4c28bace81b1c0f0f5bb1f80748e947b4"} Mar 08 02:16:05 crc kubenswrapper[4762]: I0308 02:16:05.025237 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548936-5fc89" Mar 08 02:16:05 crc kubenswrapper[4762]: I0308 02:16:05.118926 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztbx6\" (UniqueName: \"kubernetes.io/projected/8e131ec9-36c7-4193-838b-0380574c71f2-kube-api-access-ztbx6\") pod \"8e131ec9-36c7-4193-838b-0380574c71f2\" (UID: \"8e131ec9-36c7-4193-838b-0380574c71f2\") " Mar 08 02:16:05 crc kubenswrapper[4762]: I0308 02:16:05.126419 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e131ec9-36c7-4193-838b-0380574c71f2-kube-api-access-ztbx6" (OuterVolumeSpecName: "kube-api-access-ztbx6") pod "8e131ec9-36c7-4193-838b-0380574c71f2" (UID: "8e131ec9-36c7-4193-838b-0380574c71f2"). InnerVolumeSpecName "kube-api-access-ztbx6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:16:05 crc kubenswrapper[4762]: I0308 02:16:05.226059 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztbx6\" (UniqueName: \"kubernetes.io/projected/8e131ec9-36c7-4193-838b-0380574c71f2-kube-api-access-ztbx6\") on node \"crc\" DevicePath \"\"" Mar 08 02:16:05 crc kubenswrapper[4762]: I0308 02:16:05.597121 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548936-5fc89" event={"ID":"8e131ec9-36c7-4193-838b-0380574c71f2","Type":"ContainerDied","Data":"44e4da954d3c997e5a69bf5c70897f3a71d3a8ffe93a734ed59dc275ec1773f8"} Mar 08 02:16:05 crc kubenswrapper[4762]: I0308 02:16:05.597164 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44e4da954d3c997e5a69bf5c70897f3a71d3a8ffe93a734ed59dc275ec1773f8" Mar 08 02:16:05 crc kubenswrapper[4762]: I0308 02:16:05.597220 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548936-5fc89" Mar 08 02:16:06 crc kubenswrapper[4762]: I0308 02:16:06.107913 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548930-szg5g"] Mar 08 02:16:06 crc kubenswrapper[4762]: I0308 02:16:06.120382 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548930-szg5g"] Mar 08 02:16:07 crc kubenswrapper[4762]: I0308 02:16:07.285670 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b788b3ef-6db5-4fa8-85bf-001b2a121c8e" path="/var/lib/kubelet/pods/b788b3ef-6db5-4fa8-85bf-001b2a121c8e/volumes" Mar 08 02:16:12 crc kubenswrapper[4762]: I0308 02:16:12.852262 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 08 02:16:12 crc kubenswrapper[4762]: I0308 02:16:12.852951 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 02:16:42 crc kubenswrapper[4762]: I0308 02:16:42.851658 4762 patch_prober.go:28] interesting pod/machine-config-daemon-bx2x4 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 08 02:16:42 crc kubenswrapper[4762]: I0308 02:16:42.853400 4762 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 08 02:16:42 crc kubenswrapper[4762]: I0308 02:16:42.853560 4762 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" Mar 08 02:16:42 crc kubenswrapper[4762]: I0308 02:16:42.854616 4762 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3"} pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 08 02:16:42 crc kubenswrapper[4762]: I0308 02:16:42.854825 4762 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerName="machine-config-daemon" containerID="cri-o://c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3" gracePeriod=600 Mar 08 02:16:42 crc kubenswrapper[4762]: E0308 02:16:42.984490 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:16:43 crc kubenswrapper[4762]: I0308 02:16:43.133785 4762 generic.go:334] "Generic (PLEG): container finished" podID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" containerID="c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3" exitCode=0 Mar 08 02:16:43 crc kubenswrapper[4762]: I0308 02:16:43.133827 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" event={"ID":"5e384d81-de01-4ab9-b10b-2c9c5b45422c","Type":"ContainerDied","Data":"c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3"} Mar 08 02:16:43 crc kubenswrapper[4762]: I0308 02:16:43.133863 4762 scope.go:117] "RemoveContainer" containerID="c27cb3ea76a81c975d39fac956f11a7f4bc22a19f950230dcd59d64f691cbdf2" Mar 08 02:16:43 crc kubenswrapper[4762]: I0308 02:16:43.134544 4762 scope.go:117] "RemoveContainer" containerID="c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3" Mar 08 02:16:43 crc kubenswrapper[4762]: E0308 02:16:43.134795 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:16:49 crc kubenswrapper[4762]: I0308 02:16:49.378203 4762 scope.go:117] "RemoveContainer" containerID="c559c77a8bc7ff51c58ff10626ab91321c88fb5e3a9fe1a8cc57971a3435706a" Mar 08 02:16:54 crc kubenswrapper[4762]: I0308 02:16:54.264087 4762 scope.go:117] "RemoveContainer" containerID="c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3" Mar 08 02:16:54 crc kubenswrapper[4762]: E0308 02:16:54.265488 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:17:07 crc kubenswrapper[4762]: I0308 02:17:07.263938 4762 scope.go:117] "RemoveContainer" containerID="c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3" Mar 08 02:17:07 crc kubenswrapper[4762]: E0308 02:17:07.265239 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:17:18 crc kubenswrapper[4762]: I0308 02:17:18.264435 4762 scope.go:117] "RemoveContainer" containerID="c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3" Mar 08 02:17:18 crc kubenswrapper[4762]: E0308 02:17:18.265992 4762 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:17:29 crc kubenswrapper[4762]: I0308 02:17:29.270236 4762 scope.go:117] "RemoveContainer" containerID="c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3" Mar 08 02:17:29 crc kubenswrapper[4762]: E0308 02:17:29.270925 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:17:42 crc kubenswrapper[4762]: I0308 02:17:42.263817 4762 scope.go:117] "RemoveContainer" containerID="c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3" Mar 08 02:17:42 crc kubenswrapper[4762]: E0308 02:17:42.264957 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:17:53 crc kubenswrapper[4762]: I0308 02:17:53.265239 4762 scope.go:117] "RemoveContainer" containerID="c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3" Mar 08 02:17:53 crc kubenswrapper[4762]: E0308 
02:17:53.265920 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:18:00 crc kubenswrapper[4762]: I0308 02:18:00.155239 4762 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29548938-624zg"] Mar 08 02:18:00 crc kubenswrapper[4762]: E0308 02:18:00.156586 4762 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e131ec9-36c7-4193-838b-0380574c71f2" containerName="oc" Mar 08 02:18:00 crc kubenswrapper[4762]: I0308 02:18:00.156606 4762 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e131ec9-36c7-4193-838b-0380574c71f2" containerName="oc" Mar 08 02:18:00 crc kubenswrapper[4762]: I0308 02:18:00.157054 4762 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e131ec9-36c7-4193-838b-0380574c71f2" containerName="oc" Mar 08 02:18:00 crc kubenswrapper[4762]: I0308 02:18:00.158317 4762 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29548938-624zg" Mar 08 02:18:00 crc kubenswrapper[4762]: I0308 02:18:00.163156 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 08 02:18:00 crc kubenswrapper[4762]: I0308 02:18:00.163445 4762 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 08 02:18:00 crc kubenswrapper[4762]: I0308 02:18:00.163656 4762 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-t9qxk" Mar 08 02:18:00 crc kubenswrapper[4762]: I0308 02:18:00.166267 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548938-624zg"] Mar 08 02:18:00 crc kubenswrapper[4762]: I0308 02:18:00.349572 4762 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qxbh\" (UniqueName: \"kubernetes.io/projected/88aace69-d433-4fc8-8b62-5f854030d560-kube-api-access-4qxbh\") pod \"auto-csr-approver-29548938-624zg\" (UID: \"88aace69-d433-4fc8-8b62-5f854030d560\") " pod="openshift-infra/auto-csr-approver-29548938-624zg" Mar 08 02:18:00 crc kubenswrapper[4762]: I0308 02:18:00.452265 4762 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qxbh\" (UniqueName: \"kubernetes.io/projected/88aace69-d433-4fc8-8b62-5f854030d560-kube-api-access-4qxbh\") pod \"auto-csr-approver-29548938-624zg\" (UID: \"88aace69-d433-4fc8-8b62-5f854030d560\") " pod="openshift-infra/auto-csr-approver-29548938-624zg" Mar 08 02:18:00 crc kubenswrapper[4762]: I0308 02:18:00.503278 4762 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qxbh\" (UniqueName: \"kubernetes.io/projected/88aace69-d433-4fc8-8b62-5f854030d560-kube-api-access-4qxbh\") pod \"auto-csr-approver-29548938-624zg\" (UID: \"88aace69-d433-4fc8-8b62-5f854030d560\") " 
pod="openshift-infra/auto-csr-approver-29548938-624zg" Mar 08 02:18:00 crc kubenswrapper[4762]: I0308 02:18:00.786588 4762 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548938-624zg" Mar 08 02:18:01 crc kubenswrapper[4762]: I0308 02:18:01.327588 4762 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29548938-624zg"] Mar 08 02:18:01 crc kubenswrapper[4762]: W0308 02:18:01.339467 4762 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88aace69_d433_4fc8_8b62_5f854030d560.slice/crio-8363b94de8e2375f1e236856bd34c895bc02c90e113c02f747fc2b546efbd12d WatchSource:0}: Error finding container 8363b94de8e2375f1e236856bd34c895bc02c90e113c02f747fc2b546efbd12d: Status 404 returned error can't find the container with id 8363b94de8e2375f1e236856bd34c895bc02c90e113c02f747fc2b546efbd12d Mar 08 02:18:01 crc kubenswrapper[4762]: I0308 02:18:01.372509 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548938-624zg" event={"ID":"88aace69-d433-4fc8-8b62-5f854030d560","Type":"ContainerStarted","Data":"8363b94de8e2375f1e236856bd34c895bc02c90e113c02f747fc2b546efbd12d"} Mar 08 02:18:03 crc kubenswrapper[4762]: I0308 02:18:03.398138 4762 generic.go:334] "Generic (PLEG): container finished" podID="88aace69-d433-4fc8-8b62-5f854030d560" containerID="d2fdb0fecc89f5073a39188bc15cebc9f6a5d3ae27f06e13d234ff2180c30730" exitCode=0 Mar 08 02:18:03 crc kubenswrapper[4762]: I0308 02:18:03.398459 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548938-624zg" event={"ID":"88aace69-d433-4fc8-8b62-5f854030d560","Type":"ContainerDied","Data":"d2fdb0fecc89f5073a39188bc15cebc9f6a5d3ae27f06e13d234ff2180c30730"} Mar 08 02:18:04 crc kubenswrapper[4762]: I0308 02:18:04.263841 4762 scope.go:117] "RemoveContainer" 
containerID="c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3" Mar 08 02:18:04 crc kubenswrapper[4762]: E0308 02:18:04.264602 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:18:04 crc kubenswrapper[4762]: I0308 02:18:04.932872 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548938-624zg" Mar 08 02:18:05 crc kubenswrapper[4762]: I0308 02:18:05.107456 4762 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qxbh\" (UniqueName: \"kubernetes.io/projected/88aace69-d433-4fc8-8b62-5f854030d560-kube-api-access-4qxbh\") pod \"88aace69-d433-4fc8-8b62-5f854030d560\" (UID: \"88aace69-d433-4fc8-8b62-5f854030d560\") " Mar 08 02:18:05 crc kubenswrapper[4762]: I0308 02:18:05.116402 4762 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88aace69-d433-4fc8-8b62-5f854030d560-kube-api-access-4qxbh" (OuterVolumeSpecName: "kube-api-access-4qxbh") pod "88aace69-d433-4fc8-8b62-5f854030d560" (UID: "88aace69-d433-4fc8-8b62-5f854030d560"). InnerVolumeSpecName "kube-api-access-4qxbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 08 02:18:05 crc kubenswrapper[4762]: I0308 02:18:05.211001 4762 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qxbh\" (UniqueName: \"kubernetes.io/projected/88aace69-d433-4fc8-8b62-5f854030d560-kube-api-access-4qxbh\") on node \"crc\" DevicePath \"\"" Mar 08 02:18:05 crc kubenswrapper[4762]: I0308 02:18:05.443641 4762 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29548938-624zg" Mar 08 02:18:05 crc kubenswrapper[4762]: I0308 02:18:05.443780 4762 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29548938-624zg" event={"ID":"88aace69-d433-4fc8-8b62-5f854030d560","Type":"ContainerDied","Data":"8363b94de8e2375f1e236856bd34c895bc02c90e113c02f747fc2b546efbd12d"} Mar 08 02:18:05 crc kubenswrapper[4762]: I0308 02:18:05.443869 4762 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8363b94de8e2375f1e236856bd34c895bc02c90e113c02f747fc2b546efbd12d" Mar 08 02:18:06 crc kubenswrapper[4762]: I0308 02:18:06.068579 4762 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29548932-vms5g"] Mar 08 02:18:06 crc kubenswrapper[4762]: I0308 02:18:06.085117 4762 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29548932-vms5g"] Mar 08 02:18:07 crc kubenswrapper[4762]: I0308 02:18:07.288196 4762 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="981a36e4-a9ba-467d-a3f7-517e67e41f90" path="/var/lib/kubelet/pods/981a36e4-a9ba-467d-a3f7-517e67e41f90/volumes" Mar 08 02:18:15 crc kubenswrapper[4762]: I0308 02:18:15.264437 4762 scope.go:117] "RemoveContainer" containerID="c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3" Mar 08 02:18:15 crc kubenswrapper[4762]: E0308 02:18:15.269454 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:18:28 crc kubenswrapper[4762]: I0308 02:18:28.263938 4762 scope.go:117] "RemoveContainer" containerID="c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3" Mar 08 02:18:28 crc kubenswrapper[4762]: E0308 02:18:28.265035 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:18:41 crc kubenswrapper[4762]: I0308 02:18:41.263985 4762 scope.go:117] "RemoveContainer" containerID="c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3" Mar 08 02:18:41 crc kubenswrapper[4762]: E0308 02:18:41.264859 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:18:49 crc kubenswrapper[4762]: I0308 02:18:49.517919 4762 scope.go:117] "RemoveContainer" containerID="d444daafed913274e332756beb96ef221170cb482de67fd3c9879ee20d566c00" Mar 08 02:18:56 crc kubenswrapper[4762]: I0308 02:18:56.263982 4762 scope.go:117] "RemoveContainer" 
containerID="c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3" Mar 08 02:18:56 crc kubenswrapper[4762]: E0308 02:18:56.264826 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c" Mar 08 02:19:09 crc kubenswrapper[4762]: I0308 02:19:09.276963 4762 scope.go:117] "RemoveContainer" containerID="c96c5ad4b1a93a55a8c9189c8bf9fd7f461aece7381476c760a1050d62b780a3" Mar 08 02:19:09 crc kubenswrapper[4762]: E0308 02:19:09.278087 4762 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-bx2x4_openshift-machine-config-operator(5e384d81-de01-4ab9-b10b-2c9c5b45422c)\"" pod="openshift-machine-config-operator/machine-config-daemon-bx2x4" podUID="5e384d81-de01-4ab9-b10b-2c9c5b45422c"